Test Report: Docker_Linux_containerd_arm64 22112

236742b414df344dfb04283ee96fef673bd34cb2:2025-12-12:42745

Failed tests (25/369)

Order  Failed test  Duration (s)
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 501.85
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 367.93
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.23
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.4
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.26
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 735.05
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.12
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.05
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.74
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 3.06
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.3
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.62
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 1.4
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.1
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 127.01
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.06
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.26
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.26
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.26
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.26
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.3
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.2
358 TestKubernetesUpgrade 804.05
486 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 7200.077
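
Most of the serial and parallel failures in this group appear to cascade from the first one, StartWithProxy, broken down below: once the cluster never comes up, the dependent subtests fail within seconds. To reproduce a single failure locally, a sketch along these lines should work, assuming minikube's usual go-test integration harness under test/integration and its Makefile binary target (both assumptions, not shown in this report):

    # Build the binary under test, then run one failing subtest.
    # Test name and start args are copied from the report above;
    # the harness flag names are assumed.
    make out/minikube-linux-arm64
    go test -v -timeout 60m ./test/integration \
      -run 'TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy' \
      -minikube-start-args='--driver=docker --container-runtime=containerd'
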
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (501.85s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-384006 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1212 19:42:22.896919    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:42:50.610272    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:44:51.906462    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:44:51.912848    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:44:51.924321    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:44:51.945726    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:44:51.987203    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:44:52.068736    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:44:52.230235    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:44:52.551942    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:44:53.194114    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:44:54.475719    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:44:57.037136    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:45:02.159132    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:45:12.401034    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:45:32.882384    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:46:13.844351    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:47:22.896844    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:47:35.768013    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-384006 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m20.444277766s)

-- stdout --
	* [functional-384006] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-384006" primary control-plane node in "functional-384006" cluster
	* Pulling base image v0.0.48-1765505794-22112 ...
	* Found network options:
	  - HTTP_PROXY=localhost:46339
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:46339 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-384006 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-384006 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000209188s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001214076s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001214076s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-384006 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
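
The root cause is kubeadm's wait-control-plane phase timing out: the kubelet never serves http://127.0.0.1:10248/healthz within 4m0s, on both the initial attempt and the retry. The stderr warnings point at two likely contributors: the host is still on cgroups v1, which kubelet v1.35+ rejects unless the FailCgroupV1 configuration option is set to false (quoted in the warning above), and the proxy setup, where NO_PROXY does not include the minikube IP (192.168.49.2). A hedged triage sketch, combining the commands the log itself suggests with standard minikube flags:

    # Inspect the kubelet inside the node container (commands quoted
    # from the kubeadm output above):
    out/minikube-linux-arm64 -p functional-384006 ssh -- sudo systemctl status kubelet
    out/minikube-linux-arm64 -p functional-384006 ssh -- sudo journalctl -xeu kubelet
    out/minikube-linux-arm64 -p functional-384006 ssh -- curl -sSL http://127.0.0.1:10248/healthz

    # Per the proxy warning, exclude the cluster IP before retrying:
    export NO_PROXY=192.168.49.2

    # Retry with the cgroup-driver suggestion printed by minikube itself:
    out/minikube-linux-arm64 start -p functional-384006 --memory=4096 \
      --apiserver-port=8441 --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd
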
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-384006
helpers_test.go:244: (dbg) docker inspect functional-384006:

-- stdout --
	[
	    {
	        "Id": "b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17",
	        "Created": "2025-12-12T19:40:49.413785329Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43086,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T19:40:49.485581335Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/hosts",
	        "LogPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17-json.log",
	        "Name": "/functional-384006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-384006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-384006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17",
	                "LowerDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296-init/diff:/var/lib/docker/overlay2/e045d4bf347c64f3cbf42a97f0cb5729ed5699bda73ca5751717f555f7c01df1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/merged",
	                "UpperDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/diff",
	                "WorkDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-384006",
	                "Source": "/var/lib/docker/volumes/functional-384006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-384006",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-384006",
	                "name.minikube.sigs.k8s.io": "functional-384006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "36cb954f7d4f6bf90d415ba6b309740af43913afba20f6d7d93ec3c7d90d4de5",
	            "SandboxKey": "/var/run/docker/netns/36cb954f7d4f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-384006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:63:42:b7:50:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef3790c143c0333ab10341d6a40177cef53914dddf926d048a811221f7b4d25e",
	                    "EndpointID": "d9f77e46696253f9c3ce8a0a36703d7a03738ae348c39276dbe99fc3079fb5ee",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-384006",
	                        "b1a98cbc4698"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
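
The inspect output shows the container itself is fine: running since 19:40:49, the requested 4096 MiB limit (Memory: 4294967296 bytes), the API server port 8441 published on 127.0.0.1:32791, and the expected 192.168.49.2 address on the functional-384006 network. When triaging by hand, the same fields can be pulled with docker's standard Go-template formatting instead of scanning the full JSON, for example:

    # Spot-check the fields the post-mortem cares about:
    docker inspect -f '{{.State.Status}}' functional-384006
    docker inspect -f '{{.HostConfig.Memory}}' functional-384006   # 4294967296 = 4096 MiB
    docker inspect -f '{{(index .NetworkSettings.Ports "8441/tcp" 0).HostPort}}' functional-384006
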
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-384006 -n functional-384006
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-384006 -n functional-384006: exit status 6 (325.90184ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1212 19:49:05.040907   48147 status.go:458] kubeconfig endpoint: get endpoint: "functional-384006" does not appear in /home/jenkins/minikube-integration/22112-2315/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
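
Exit status 6 here is secondary: the host container is Running, but the kubeconfig at /home/jenkins/minikube-integration/22112-2315/kubeconfig has no endpoint for functional-384006 because the start never completed, so minikube never wrote the context. Outside CI, minikube's own hint from the status output would apply:

    # Repoint kubectl at the profile, as the status output suggests:
    out/minikube-linux-arm64 -p functional-384006 update-context
    kubectl config current-context   # should now report functional-384006
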
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-008271 ssh sudo cat /etc/ssl/certs/41202.pem                                                                                                         │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image load --daemon kicbase/echo-server:functional-008271 --alsologtostderr                                                                   │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ ssh            │ functional-008271 ssh sudo cat /usr/share/ca-certificates/41202.pem                                                                                             │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ ssh            │ functional-008271 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image ls                                                                                                                                      │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image load --daemon kicbase/echo-server:functional-008271 --alsologtostderr                                                                   │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ update-context │ functional-008271 update-context --alsologtostderr -v=2                                                                                                         │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image ls                                                                                                                                      │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ update-context │ functional-008271 update-context --alsologtostderr -v=2                                                                                                         │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image save kicbase/echo-server:functional-008271 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ update-context │ functional-008271 update-context --alsologtostderr -v=2                                                                                                         │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image rm kicbase/echo-server:functional-008271 --alsologtostderr                                                                              │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image ls                                                                                                                                      │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image ls                                                                                                                                      │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image save --daemon kicbase/echo-server:functional-008271 --alsologtostderr                                                                   │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image ls --format short --alsologtostderr                                                                                                     │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image ls --format yaml --alsologtostderr                                                                                                      │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image ls --format json --alsologtostderr                                                                                                      │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image ls --format table --alsologtostderr                                                                                                     │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ ssh            │ functional-008271 ssh pgrep buildkitd                                                                                                                           │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │                     │
	│ image          │ functional-008271 image build -t localhost/my-image:functional-008271 testdata/build --alsologtostderr                                                          │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image ls                                                                                                                                      │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ delete         │ -p functional-008271                                                                                                                                            │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ start          │ -p functional-384006 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0         │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:40:44
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:40:44.310161   42701 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:40:44.310273   42701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:40:44.310277   42701 out.go:374] Setting ErrFile to fd 2...
	I1212 19:40:44.310281   42701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:40:44.310628   42701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 19:40:44.311115   42701 out.go:368] Setting JSON to false
	I1212 19:40:44.312242   42701 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1394,"bootTime":1765567051,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 19:40:44.312304   42701 start.go:143] virtualization:  
	I1212 19:40:44.316422   42701 out.go:179] * [functional-384006] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 19:40:44.321012   42701 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:40:44.321108   42701 notify.go:221] Checking for updates...
	I1212 19:40:44.327985   42701 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:40:44.331143   42701 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:40:44.334255   42701 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	I1212 19:40:44.337345   42701 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 19:40:44.340453   42701 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:40:44.343550   42701 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:40:44.378127   42701 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 19:40:44.378238   42701 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:40:44.442593   42701 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-12 19:40:44.433207397 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 19:40:44.442683   42701 docker.go:319] overlay module found
	I1212 19:40:44.445954   42701 out.go:179] * Using the docker driver based on user configuration
	I1212 19:40:44.448927   42701 start.go:309] selected driver: docker
	I1212 19:40:44.448934   42701 start.go:927] validating driver "docker" against <nil>
	I1212 19:40:44.448946   42701 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:40:44.449638   42701 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:40:44.504328   42701 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-12 19:40:44.494804233 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 19:40:44.504490   42701 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 19:40:44.504702   42701 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 19:40:44.507653   42701 out.go:179] * Using Docker driver with root privileges
	I1212 19:40:44.510526   42701 cni.go:84] Creating CNI manager for ""
	I1212 19:40:44.510593   42701 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 19:40:44.510599   42701 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 19:40:44.510668   42701 start.go:353] cluster config:
	{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:40:44.515768   42701 out.go:179] * Starting "functional-384006" primary control-plane node in "functional-384006" cluster
	I1212 19:40:44.518614   42701 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 19:40:44.521608   42701 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 19:40:44.524484   42701 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 19:40:44.524520   42701 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 19:40:44.524528   42701 cache.go:65] Caching tarball of preloaded images
	I1212 19:40:44.524562   42701 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 19:40:44.524615   42701 preload.go:238] Found /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 19:40:44.524624   42701 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 19:40:44.524954   42701 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/config.json ...
	I1212 19:40:44.524974   42701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/config.json: {Name:mkc67cd233583856f1f5fc489517f02e18634395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:40:44.545471   42701 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 19:40:44.545481   42701 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 19:40:44.545499   42701 cache.go:243] Successfully downloaded all kic artifacts
	I1212 19:40:44.545533   42701 start.go:360] acquireMachinesLock for functional-384006: {Name:mk3334c8fedf7efc32fb4628474f2cba3c1d9181 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:40:44.545642   42701 start.go:364] duration metric: took 96.063µs to acquireMachinesLock for "functional-384006"
	I1212 19:40:44.545665   42701 start.go:93] Provisioning new machine with config: &{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 19:40:44.545726   42701 start.go:125] createHost starting for "" (driver="docker")
	I1212 19:40:44.549281   42701 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1212 19:40:44.549548   42701 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:46339 to docker env.
	I1212 19:40:44.549576   42701 start.go:159] libmachine.API.Create for "functional-384006" (driver="docker")
	I1212 19:40:44.549596   42701 client.go:173] LocalClient.Create starting
	I1212 19:40:44.549656   42701 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem
	I1212 19:40:44.549687   42701 main.go:143] libmachine: Decoding PEM data...
	I1212 19:40:44.549700   42701 main.go:143] libmachine: Parsing certificate...
	I1212 19:40:44.549750   42701 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem
	I1212 19:40:44.549764   42701 main.go:143] libmachine: Decoding PEM data...
	I1212 19:40:44.549774   42701 main.go:143] libmachine: Parsing certificate...
	I1212 19:40:44.550119   42701 cli_runner.go:164] Run: docker network inspect functional-384006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 19:40:44.565682   42701 cli_runner.go:211] docker network inspect functional-384006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 19:40:44.565762   42701 network_create.go:284] running [docker network inspect functional-384006] to gather additional debugging logs...
	I1212 19:40:44.565776   42701 cli_runner.go:164] Run: docker network inspect functional-384006
	W1212 19:40:44.579634   42701 cli_runner.go:211] docker network inspect functional-384006 returned with exit code 1
	I1212 19:40:44.579661   42701 network_create.go:287] error running [docker network inspect functional-384006]: docker network inspect functional-384006: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-384006 not found
	I1212 19:40:44.579672   42701 network_create.go:289] output of [docker network inspect functional-384006]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-384006 not found
	
	** /stderr **
	I1212 19:40:44.579758   42701 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 19:40:44.595724   42701 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400193d020}
	I1212 19:40:44.595753   42701 network_create.go:124] attempt to create docker network functional-384006 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1212 19:40:44.595810   42701 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-384006 functional-384006
	I1212 19:40:44.648762   42701 network_create.go:108] docker network functional-384006 192.168.49.0/24 created
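	[editor's note] The network creation above can be verified with the same docker CLI the harness drives; a minimal sketch (assuming the functional-384006 network still exists when run):
	    docker network inspect functional-384006 \
	      --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
	For this run that would print 192.168.49.0/24 via 192.168.49.1, matching the subnet reservation logged above.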
	I1212 19:40:44.648784   42701 kic.go:121] calculated static IP "192.168.49.2" for the "functional-384006" container
	I1212 19:40:44.648868   42701 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 19:40:44.667189   42701 cli_runner.go:164] Run: docker volume create functional-384006 --label name.minikube.sigs.k8s.io=functional-384006 --label created_by.minikube.sigs.k8s.io=true
	I1212 19:40:44.686404   42701 oci.go:103] Successfully created a docker volume functional-384006
	I1212 19:40:44.686484   42701 cli_runner.go:164] Run: docker run --rm --name functional-384006-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-384006 --entrypoint /usr/bin/test -v functional-384006:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib
	I1212 19:40:45.308553   42701 oci.go:107] Successfully prepared a docker volume functional-384006
	I1212 19:40:45.308616   42701 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 19:40:45.308627   42701 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 19:40:45.308706   42701 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-384006:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 19:40:49.339364   42701 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-384006:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir: (4.030626381s)
	I1212 19:40:49.339385   42701 kic.go:203] duration metric: took 4.03075446s to extract preloaded images to volume ...
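	[editor's note] The preload extraction above is plain tar run inside a throwaway container with the tarball and the target volume both bind-mounted. A generic sketch of the same pattern (the three shell variables are illustrative placeholders, not values from this run):
	    docker run --rm --entrypoint /usr/bin/tar \
	      -v "$PRELOAD_TARBALL":/preloaded.tar:ro \
	      -v "$VOLUME_NAME":/extractDir \
	      "$KICBASE_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir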
	W1212 19:40:49.339535   42701 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 19:40:49.339629   42701 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 19:40:49.399382   42701 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-384006 --name functional-384006 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-384006 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-384006 --network functional-384006 --ip 192.168.49.2 --volume functional-384006:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138
	I1212 19:40:49.691529   42701 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Running}}
	I1212 19:40:49.713597   42701 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
	I1212 19:40:49.741772   42701 cli_runner.go:164] Run: docker exec functional-384006 stat /var/lib/dpkg/alternatives/iptables
	I1212 19:40:49.785019   42701 oci.go:144] the created container "functional-384006" has a running status.
	I1212 19:40:49.785038   42701 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa...
	I1212 19:40:50.030738   42701 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 19:40:50.071006   42701 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
	I1212 19:40:50.092909   42701 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 19:40:50.092920   42701 kic_runner.go:114] Args: [docker exec --privileged functional-384006 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 19:40:50.145932   42701 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
	I1212 19:40:50.179366   42701 machine.go:94] provisionDockerMachine start ...
	I1212 19:40:50.179482   42701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:40:50.201552   42701 main.go:143] libmachine: Using SSH client type: native
	I1212 19:40:50.201876   42701 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:40:50.201884   42701 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 19:40:50.202609   42701 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 19:40:53.359592   42701 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-384006
	
	I1212 19:40:53.359605   42701 ubuntu.go:182] provisioning hostname "functional-384006"
	I1212 19:40:53.359667   42701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:40:53.378349   42701 main.go:143] libmachine: Using SSH client type: native
	I1212 19:40:53.378706   42701 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:40:53.378719   42701 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-384006 && echo "functional-384006" | sudo tee /etc/hostname
	I1212 19:40:53.541260   42701 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-384006
	
	I1212 19:40:53.541326   42701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:40:53.561542   42701 main.go:143] libmachine: Using SSH client type: native
	I1212 19:40:53.561847   42701 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:40:53.561860   42701 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-384006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-384006/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-384006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 19:40:53.712504   42701 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 19:40:53.712518   42701 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-2315/.minikube}
	I1212 19:40:53.712537   42701 ubuntu.go:190] setting up certificates
	I1212 19:40:53.712546   42701 provision.go:84] configureAuth start
	I1212 19:40:53.712603   42701 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384006
	I1212 19:40:53.730838   42701 provision.go:143] copyHostCerts
	I1212 19:40:53.730905   42701 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem, removing ...
	I1212 19:40:53.730912   42701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem
	I1212 19:40:53.730987   42701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem (1078 bytes)
	I1212 19:40:53.731074   42701 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem, removing ...
	I1212 19:40:53.731077   42701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem
	I1212 19:40:53.731107   42701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem (1123 bytes)
	I1212 19:40:53.731188   42701 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem, removing ...
	I1212 19:40:53.731191   42701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem
	I1212 19:40:53.731219   42701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem (1679 bytes)
	I1212 19:40:53.731261   42701 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem org=jenkins.functional-384006 san=[127.0.0.1 192.168.49.2 functional-384006 localhost minikube]
	I1212 19:40:53.985720   42701 provision.go:177] copyRemoteCerts
	I1212 19:40:53.985776   42701 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 19:40:53.985816   42701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:40:54.002922   42701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:40:54.115669   42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 19:40:54.133018   42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 19:40:54.150638   42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 19:40:54.167630   42701 provision.go:87] duration metric: took 455.071631ms to configureAuth
	I1212 19:40:54.167647   42701 ubuntu.go:206] setting minikube options for container-runtime
	I1212 19:40:54.167826   42701 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 19:40:54.167832   42701 machine.go:97] duration metric: took 3.98845611s to provisionDockerMachine
	I1212 19:40:54.167926   42701 client.go:176] duration metric: took 9.61832436s to LocalClient.Create
	I1212 19:40:54.167942   42701 start.go:167] duration metric: took 9.618369487s to libmachine.API.Create "functional-384006"
	I1212 19:40:54.167948   42701 start.go:293] postStartSetup for "functional-384006" (driver="docker")
	I1212 19:40:54.167957   42701 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 19:40:54.168014   42701 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 19:40:54.168049   42701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:40:54.184492   42701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:40:54.292188   42701 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 19:40:54.295686   42701 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 19:40:54.295703   42701 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 19:40:54.295713   42701 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-2315/.minikube/addons for local assets ...
	I1212 19:40:54.295768   42701 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-2315/.minikube/files for local assets ...
	I1212 19:40:54.295875   42701 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1212 19:40:54.295955   42701 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> hosts in /etc/test/nested/copy/4120
	I1212 19:40:54.296004   42701 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4120
	I1212 19:40:54.303563   42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1212 19:40:54.321914   42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts --> /etc/test/nested/copy/4120/hosts (40 bytes)
	I1212 19:40:54.339505   42701 start.go:296] duration metric: took 171.544506ms for postStartSetup
	I1212 19:40:54.339885   42701 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384006
	I1212 19:40:54.356827   42701 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/config.json ...
	I1212 19:40:54.357090   42701 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 19:40:54.357127   42701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:40:54.373329   42701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:40:54.476716   42701 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 19:40:54.482048   42701 start.go:128] duration metric: took 9.936309777s to createHost
	I1212 19:40:54.482063   42701 start.go:83] releasing machines lock for "functional-384006", held for 9.936414126s
	I1212 19:40:54.482141   42701 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384006
	I1212 19:40:54.502216   42701 out.go:179] * Found network options:
	I1212 19:40:54.505144   42701 out.go:179]   - HTTP_PROXY=localhost:46339
	W1212 19:40:54.508027   42701 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1212 19:40:54.510991   42701 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1212 19:40:54.514023   42701 ssh_runner.go:195] Run: cat /version.json
	I1212 19:40:54.514073   42701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:40:54.514113   42701 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 19:40:54.514164   42701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:40:54.532511   42701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:40:54.534156   42701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:40:54.635420   42701 ssh_runner.go:195] Run: systemctl --version
	I1212 19:40:54.729544   42701 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 19:40:54.733676   42701 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 19:40:54.733738   42701 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 19:40:54.759295   42701 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1212 19:40:54.759308   42701 start.go:496] detecting cgroup driver to use...
	I1212 19:40:54.759338   42701 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 19:40:54.759382   42701 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 19:40:54.774145   42701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 19:40:54.786499   42701 docker.go:218] disabling cri-docker service (if available) ...
	I1212 19:40:54.786548   42701 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 19:40:54.803147   42701 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 19:40:54.820882   42701 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 19:40:54.938578   42701 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 19:40:55.057426   42701 docker.go:234] disabling docker service ...
	I1212 19:40:55.057500   42701 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 19:40:55.092468   42701 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 19:40:55.109954   42701 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 19:40:55.273280   42701 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 19:40:55.382582   42701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
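	[editor's note] The two blocks above retire cri-docker and docker with the same stop/disable/mask idiom so that containerd is the only runtime left active; condensed into a hedged sketch (commands copied from the log, unit names assumed present in the kicbase image):
	    sudo systemctl stop -f docker.socket docker.service
	    sudo systemctl disable docker.socket
	    sudo systemctl mask docker.service
	The trailing is-active check confirms nothing restarted the service behind minikube's back.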
	I1212 19:40:55.396029   42701 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 19:40:55.410031   42701 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 19:40:55.418655   42701 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 19:40:55.427275   42701 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 19:40:55.427340   42701 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 19:40:55.436177   42701 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:40:55.444614   42701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 19:40:55.453070   42701 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:40:55.461686   42701 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 19:40:55.469786   42701 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 19:40:55.478293   42701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 19:40:55.486712   42701 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 19:40:55.495711   42701 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 19:40:55.503162   42701 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 19:40:55.510493   42701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:40:55.632911   42701 ssh_runner.go:195] Run: sudo systemctl restart containerd
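	[editor's note] The sequence from 19:40:55.410 onward is an in-place rewrite of /etc/containerd/config.toml followed by a daemon restart. A condensed sketch of the two load-bearing edits plus the restart (sed expressions copied from the log; paths assume a stock kicbase image):
	    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
	    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	    sudo systemctl daemon-reload && sudo systemctl restart containerd
	SystemdCgroup is forced to false here because the host was detected as using the "cgroupfs" driver, and kubelet and containerd must agree on the cgroup driver.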
	I1212 19:40:55.766297   42701 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 19:40:55.766387   42701 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 19:40:55.770388   42701 start.go:564] Will wait 60s for crictl version
	I1212 19:40:55.770441   42701 ssh_runner.go:195] Run: which crictl
	I1212 19:40:55.774491   42701 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 19:40:55.799148   42701 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 19:40:55.799222   42701 ssh_runner.go:195] Run: containerd --version
	I1212 19:40:55.818936   42701 ssh_runner.go:195] Run: containerd --version
	I1212 19:40:55.846395   42701 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 19:40:55.849236   42701 cli_runner.go:164] Run: docker network inspect functional-384006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 19:40:55.865344   42701 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 19:40:55.869370   42701 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
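	[editor's note] The /etc/hosts update above uses a grep-and-append idiom rather than sed, which keeps repeated starts idempotent: strip any existing entry, then re-add the current one. The same pattern in isolation (HOST_IP and the temp-file name are placeholders):
	    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
	      echo "$HOST_IP"$'\t'"host.minikube.internal"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts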
	I1212 19:40:55.878997   42701 kubeadm.go:884] updating cluster {Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 19:40:55.879103   42701 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 19:40:55.879161   42701 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 19:40:55.903551   42701 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 19:40:55.903561   42701 containerd.go:534] Images already preloaded, skipping extraction
	I1212 19:40:55.903621   42701 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 19:40:55.929106   42701 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 19:40:55.929117   42701 cache_images.go:86] Images are preloaded, skipping loading
	I1212 19:40:55.929124   42701 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1212 19:40:55.929254   42701 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-384006 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 19:40:55.929318   42701 ssh_runner.go:195] Run: sudo crictl info
	I1212 19:40:55.954490   42701 cni.go:84] Creating CNI manager for ""
	I1212 19:40:55.954500   42701 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 19:40:55.954521   42701 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 19:40:55.954541   42701 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-384006 NodeName:functional-384006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 19:40:55.954644   42701 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-384006"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
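	[editor's note] One way to sanity-check a generated config like the one above before a real init is kubeadm's dry-run mode; a hedged sketch, not something this test run executed:
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	This renders the manifests and validates the three config documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration) without touching the node.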
	I1212 19:40:55.954707   42701 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 19:40:55.962641   42701 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 19:40:55.962697   42701 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 19:40:55.970323   42701 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 19:40:55.982777   42701 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 19:40:55.995589   42701 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1212 19:40:56.011018   42701 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 19:40:56.015555   42701 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 19:40:56.025946   42701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:40:56.153914   42701 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:40:56.173527   42701 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006 for IP: 192.168.49.2
	I1212 19:40:56.173536   42701 certs.go:195] generating shared ca certs ...
	I1212 19:40:56.173570   42701 certs.go:227] acquiring lock for ca certs: {Name:mk39256c1929fe0803d745b94bd58afc348a7e3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:40:56.173724   42701 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key
	I1212 19:40:56.173769   42701 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key
	I1212 19:40:56.173775   42701 certs.go:257] generating profile certs ...
	I1212 19:40:56.173828   42701 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.key
	I1212 19:40:56.173844   42701 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt with IP's: []
	I1212 19:40:56.532866   42701 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt ...
	I1212 19:40:56.532887   42701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: {Name:mkfc9e34b0f1c99d91593dc19a049aba37bdd405 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:40:56.533085   42701 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.key ...
	I1212 19:40:56.533092   42701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.key: {Name:mkb01f552e965f3b10de445d0acb1cf236c8c366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:40:56.533179   42701 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key.6e756d1b
	I1212 19:40:56.533191   42701 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.crt.6e756d1b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1212 19:40:56.761942   42701 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.crt.6e756d1b ...
	I1212 19:40:56.761960   42701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.crt.6e756d1b: {Name:mk6291efbc5837d9af5a2a86e1048ea5beaa00e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:40:56.762146   42701 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key.6e756d1b ...
	I1212 19:40:56.762154   42701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key.6e756d1b: {Name:mkc6b03f5b97f234184e0c9a10a5beb4e3f40854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:40:56.762248   42701 certs.go:382] copying /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.crt.6e756d1b -> /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.crt
	I1212 19:40:56.762322   42701 certs.go:386] copying /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key.6e756d1b -> /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key
	I1212 19:40:56.762375   42701 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key
	I1212 19:40:56.762387   42701 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.crt with IP's: []
	I1212 19:40:57.111819   42701 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.crt ...
	I1212 19:40:57.111845   42701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.crt: {Name:mk61b30049c330c3b79396114753a2e33f49d208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:40:57.112053   42701 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key ...
	I1212 19:40:57.112062   42701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key: {Name:mk60a8c5379d01c0c15152b95a875b9b71f78bff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:40:57.112281   42701 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem (1338 bytes)
	W1212 19:40:57.112330   42701 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1212 19:40:57.112339   42701 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 19:40:57.112379   42701 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem (1078 bytes)
	I1212 19:40:57.112406   42701 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem (1123 bytes)
	I1212 19:40:57.112429   42701 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem (1679 bytes)
	I1212 19:40:57.112489   42701 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1212 19:40:57.113091   42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 19:40:57.131070   42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 19:40:57.150569   42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 19:40:57.168806   42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 19:40:57.186548   42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 19:40:57.204581   42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 19:40:57.223308   42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 19:40:57.241998   42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 19:40:57.259628   42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1212 19:40:57.277661   42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 19:40:57.295169   42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1212 19:40:57.312863   42701 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 19:40:57.325510   42701 ssh_runner.go:195] Run: openssl version
	I1212 19:40:57.332523   42701 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:40:57.340004   42701 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 19:40:57.347406   42701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:40:57.351185   42701 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:30 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:40:57.351240   42701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:40:57.392065   42701 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 19:40:57.399583   42701 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 19:40:57.406996   42701 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1212 19:40:57.414506   42701 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1212 19:40:57.422335   42701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1212 19:40:57.426019   42701 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:40 /usr/share/ca-certificates/4120.pem
	I1212 19:40:57.426071   42701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1212 19:40:57.467038   42701 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 19:40:57.474748   42701 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4120.pem /etc/ssl/certs/51391683.0
	I1212 19:40:57.482206   42701 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1212 19:40:57.489668   42701 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1212 19:40:57.497440   42701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1212 19:40:57.501261   42701 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:40 /usr/share/ca-certificates/41202.pem
	I1212 19:40:57.501316   42701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1212 19:40:57.542356   42701 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 19:40:57.549976   42701 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41202.pem /etc/ssl/certs/3ec20f2e.0
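	[editor's note] The loop of openssl/ln calls above reproduces OpenSSL's hashed-directory layout by hand: each CA certificate gets a symlink in /etc/ssl/certs named after its subject hash, which is how TLS clients locate it. A compact equivalent (CERT is a placeholder path):
	    HASH=$(openssl x509 -hash -noout -in "$CERT")
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
	For example, b5213941.0 above is the subject hash of minikubeCA.pem computed this way.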
	I1212 19:40:57.557405   42701 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:40:57.560910   42701 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 19:40:57.560952   42701 kubeadm.go:401] StartCluster: {Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:40:57.561023   42701 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 19:40:57.561086   42701 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:40:57.590406   42701 cri.go:89] found id: ""
	I1212 19:40:57.590467   42701 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 19:40:57.598052   42701 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 19:40:57.605581   42701 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 19:40:57.605640   42701 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 19:40:57.613371   42701 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 19:40:57.613380   42701 kubeadm.go:158] found existing configuration files:
	
	I1212 19:40:57.613445   42701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 19:40:57.621043   42701 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 19:40:57.621095   42701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 19:40:57.628172   42701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 19:40:57.635356   42701 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 19:40:57.635410   42701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 19:40:57.642343   42701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 19:40:57.649795   42701 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 19:40:57.649850   42701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 19:40:57.657122   42701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 19:40:57.664772   42701 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 19:40:57.664828   42701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 19:40:57.672341   42701 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 19:40:57.733985   42701 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 19:40:57.734474   42701 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 19:40:57.814083   42701 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 19:40:57.814149   42701 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 19:40:57.814183   42701 kubeadm.go:319] OS: Linux
	I1212 19:40:57.814227   42701 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 19:40:57.814274   42701 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 19:40:57.814321   42701 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 19:40:57.814372   42701 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 19:40:57.814419   42701 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 19:40:57.814469   42701 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 19:40:57.814514   42701 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 19:40:57.814577   42701 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 19:40:57.814622   42701 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 19:40:57.885000   42701 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 19:40:57.885127   42701 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 19:40:57.885238   42701 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 19:40:57.896539   42701 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 19:40:57.903138   42701 out.go:252]   - Generating certificates and keys ...
	I1212 19:40:57.903228   42701 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 19:40:57.903298   42701 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 19:40:58.467454   42701 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 19:40:59.248213   42701 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 19:40:59.476271   42701 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 19:40:59.725706   42701 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 19:41:00.097009   42701 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 19:41:00.097142   42701 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-384006 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 19:41:00.200942   42701 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 19:41:00.201127   42701 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-384006 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 19:41:00.321004   42701 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 19:41:00.473525   42701 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 19:41:00.654531   42701 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 19:41:00.654906   42701 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 19:41:00.789773   42701 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 19:41:00.880064   42701 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 19:41:01.176824   42701 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 19:41:01.485183   42701 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 19:41:01.536937   42701 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 19:41:01.537645   42701 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 19:41:01.540461   42701 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 19:41:01.544024   42701 out.go:252]   - Booting up control plane ...
	I1212 19:41:01.544119   42701 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 19:41:01.544200   42701 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 19:41:01.544271   42701 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 19:41:01.559376   42701 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 19:41:01.559477   42701 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 19:41:01.567380   42701 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 19:41:01.567830   42701 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 19:41:01.568153   42701 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 19:41:01.692374   42701 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 19:41:01.692516   42701 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 19:45:01.691912   42701 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000209188s
	I1212 19:45:01.691940   42701 kubeadm.go:319] 
	I1212 19:45:01.692010   42701 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 19:45:01.692051   42701 kubeadm.go:319] 	- The kubelet is not running
	I1212 19:45:01.692686   42701 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 19:45:01.692702   42701 kubeadm.go:319] 
	I1212 19:45:01.693002   42701 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 19:45:01.693315   42701 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 19:45:01.693381   42701 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 19:45:01.693398   42701 kubeadm.go:319] 
	I1212 19:45:01.701221   42701 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 19:45:01.701624   42701 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 19:45:01.701726   42701 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 19:45:01.701973   42701 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 19:45:01.701978   42701 kubeadm.go:319] 
	I1212 19:45:01.702041   42701 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1212 19:45:01.702160   42701 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-384006 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-384006 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000209188s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
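	The first init attempt fails on the kubelet health probe quoted above, and minikube resets and retries immediately below. For interactive debugging, the same probe can be replayed by hand; a minimal sketch, assuming the profile name functional-384006 from this run and reusing the exact commands kubeadm recommends:

	    # Replay the health check kubeadm polls for up to 4m0s
	    minikube ssh -p functional-384006 "curl -sSL http://127.0.0.1:10248/healthz"

	    # Ask systemd why the kubelet is not serving on 10248
	    minikube ssh -p functional-384006 "sudo systemctl status kubelet"
	    minikube ssh -p functional-384006 "sudo journalctl -xeu kubelet -n 100"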
	
	I1212 19:45:01.702497   42701 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1212 19:45:02.120065   42701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 19:45:02.138262   42701 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 19:45:02.138316   42701 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 19:45:02.146370   42701 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 19:45:02.146383   42701 kubeadm.go:158] found existing configuration files:
	
	I1212 19:45:02.146436   42701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 19:45:02.157210   42701 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 19:45:02.157265   42701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 19:45:02.165330   42701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 19:45:02.173856   42701 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 19:45:02.173917   42701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 19:45:02.183915   42701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 19:45:02.191989   42701 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 19:45:02.192049   42701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 19:45:02.200029   42701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 19:45:02.210376   42701 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 19:45:02.210433   42701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 19:45:02.218868   42701 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 19:45:02.351777   42701 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 19:45:02.352255   42701 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 19:45:02.428320   42701 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 19:49:04.286204   42701 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 19:49:04.286222   42701 kubeadm.go:319] 
	I1212 19:49:04.286293   42701 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 19:49:04.290915   42701 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 19:49:04.290970   42701 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 19:49:04.291087   42701 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 19:49:04.291149   42701 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 19:49:04.291183   42701 kubeadm.go:319] OS: Linux
	I1212 19:49:04.291246   42701 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 19:49:04.291297   42701 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 19:49:04.291344   42701 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 19:49:04.291391   42701 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 19:49:04.291437   42701 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 19:49:04.291486   42701 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 19:49:04.291530   42701 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 19:49:04.291577   42701 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 19:49:04.291624   42701 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 19:49:04.291695   42701 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 19:49:04.291789   42701 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 19:49:04.291891   42701 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 19:49:04.291953   42701 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 19:49:04.294973   42701 out.go:252]   - Generating certificates and keys ...
	I1212 19:49:04.295045   42701 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 19:49:04.295113   42701 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 19:49:04.295197   42701 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 19:49:04.295257   42701 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 19:49:04.295325   42701 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 19:49:04.295378   42701 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 19:49:04.295440   42701 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 19:49:04.295500   42701 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 19:49:04.295574   42701 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 19:49:04.295645   42701 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 19:49:04.295682   42701 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 19:49:04.295736   42701 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 19:49:04.295786   42701 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 19:49:04.295858   42701 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 19:49:04.295910   42701 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 19:49:04.295972   42701 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 19:49:04.296026   42701 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 19:49:04.296108   42701 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 19:49:04.296173   42701 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 19:49:04.299042   42701 out.go:252]   - Booting up control plane ...
	I1212 19:49:04.299151   42701 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 19:49:04.299227   42701 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 19:49:04.299296   42701 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 19:49:04.299403   42701 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 19:49:04.299495   42701 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 19:49:04.299628   42701 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 19:49:04.299730   42701 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 19:49:04.299772   42701 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 19:49:04.299968   42701 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 19:49:04.300085   42701 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 19:49:04.300151   42701 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001214076s
	I1212 19:49:04.300153   42701 kubeadm.go:319] 
	I1212 19:49:04.300215   42701 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 19:49:04.300255   42701 kubeadm.go:319] 	- The kubelet is not running
	I1212 19:49:04.300366   42701 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 19:49:04.300369   42701 kubeadm.go:319] 
	I1212 19:49:04.300480   42701 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 19:49:04.300511   42701 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 19:49:04.300541   42701 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 19:49:04.300595   42701 kubeadm.go:319] 
	I1212 19:49:04.300596   42701 kubeadm.go:403] duration metric: took 8m6.739647745s to StartCluster
	I1212 19:49:04.300625   42701 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:49:04.300687   42701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:49:04.325747   42701 cri.go:89] found id: ""
	I1212 19:49:04.325762   42701 logs.go:282] 0 containers: []
	W1212 19:49:04.325774   42701 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:49:04.325780   42701 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:49:04.325854   42701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:49:04.357078   42701 cri.go:89] found id: ""
	I1212 19:49:04.357093   42701 logs.go:282] 0 containers: []
	W1212 19:49:04.357100   42701 logs.go:284] No container was found matching "etcd"
	I1212 19:49:04.357105   42701 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:49:04.357167   42701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:49:04.380493   42701 cri.go:89] found id: ""
	I1212 19:49:04.380508   42701 logs.go:282] 0 containers: []
	W1212 19:49:04.380515   42701 logs.go:284] No container was found matching "coredns"
	I1212 19:49:04.380520   42701 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:49:04.380581   42701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:49:04.403672   42701 cri.go:89] found id: ""
	I1212 19:49:04.403686   42701 logs.go:282] 0 containers: []
	W1212 19:49:04.403693   42701 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:49:04.403698   42701 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:49:04.403752   42701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:49:04.428821   42701 cri.go:89] found id: ""
	I1212 19:49:04.428834   42701 logs.go:282] 0 containers: []
	W1212 19:49:04.428841   42701 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:49:04.428847   42701 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:49:04.428902   42701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:49:04.456883   42701 cri.go:89] found id: ""
	I1212 19:49:04.456896   42701 logs.go:282] 0 containers: []
	W1212 19:49:04.456904   42701 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:49:04.456909   42701 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:49:04.456964   42701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:49:04.484233   42701 cri.go:89] found id: ""
	I1212 19:49:04.484246   42701 logs.go:282] 0 containers: []
	W1212 19:49:04.484253   42701 logs.go:284] No container was found matching "kindnet"
	I1212 19:49:04.484260   42701 logs.go:123] Gathering logs for kubelet ...
	I1212 19:49:04.484270   42701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:49:04.540591   42701 logs.go:123] Gathering logs for dmesg ...
	I1212 19:49:04.540608   42701 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:49:04.551242   42701 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:49:04.551260   42701 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:49:04.617266   42701 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:49:04.608747    4749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:49:04.609239    4749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:49:04.610858    4749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:49:04.611353    4749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:49:04.613012    4749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:49:04.608747    4749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:49:04.609239    4749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:49:04.610858    4749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:49:04.611353    4749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:49:04.613012    4749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:49:04.617276   42701 logs.go:123] Gathering logs for containerd ...
	I1212 19:49:04.617287   42701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:49:04.653829   42701 logs.go:123] Gathering logs for container status ...
	I1212 19:49:04.653847   42701 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 19:49:04.679477   42701 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001214076s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 19:49:04.679515   42701 out.go:285] * 
	W1212 19:49:04.679572   42701 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1212 19:49:04.679812   42701 out.go:285] * 
	W1212 19:49:04.682301   42701 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 19:49:04.688334   42701 out.go:203] 
	W1212 19:49:04.691068   42701 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1212 19:49:04.691131   42701 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 19:49:04.691154   42701 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 19:49:04.694294   42701 out.go:203] 
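	The remedy the trace itself offers can be tried directly. A hedged sketch of a retry with the kubelet cgroup driver override (the --extra-config flag is copied verbatim from the Suggestion line above; the remaining flags mirror the cluster config recorded at the top of this trace, and whether the override resolves a cgroup v1 validation failure on this host is not established by this log):

	    # Retry with the cgroup driver override suggested above
	    out/minikube-linux-arm64 delete -p functional-384006
	    out/minikube-linux-arm64 start -p functional-384006 --memory=4096 \
	      --apiserver-port=8441 --driver=docker --container-runtime=containerd \
	      --kubernetes-version=v1.35.0-beta.0 \
	      --extra-config=kubelet.cgroup-driver=systemd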
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.709459179Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.709520478Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.709639309Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.709713875Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.709773197Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.709835357Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.709890494Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.709947608Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.710012640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.710102837Z" level=info msg="Connect containerd service"
	Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.710479975Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.711194154Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.726667191Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.726904075Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.726846148Z" level=info msg="Start subscribing containerd event"
	Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.727211455Z" level=info msg="Start recovering state"
	Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.763003138Z" level=info msg="Start event monitor"
	Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.763202673Z" level=info msg="Start cni network conf syncer for default"
	Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.763269945Z" level=info msg="Start streaming server"
	Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.763341270Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.763399033Z" level=info msg="runtime interface starting up..."
	Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.763452496Z" level=info msg="starting plugins..."
	Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.763514426Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 19:40:55 functional-384006 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.766218253Z" level=info msg="containerd successfully booted in 0.081302s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:49:05.646418    4864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:49:05.646997    4864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:49:05.648663    4864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:49:05.649137    4864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:49:05.650689    4864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 19:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014827] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497798] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.037128] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.743560] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.524348] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 19:49:05 up 31 min,  0 user,  load average: 0.04, 0.45, 0.74
	Linux functional-384006 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 19:49:02 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:49:02 functional-384006 kubelet[4668]: E1212 19:49:02.740368    4668 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 19:49:02 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 19:49:02 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 19:49:03 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 12 19:49:03 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:49:03 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:49:03 functional-384006 kubelet[4673]: E1212 19:49:03.489009    4673 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 19:49:03 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 19:49:03 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 19:49:04 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 12 19:49:04 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:49:04 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:49:04 functional-384006 kubelet[4678]: E1212 19:49:04.240085    4678 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 19:49:04 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 19:49:04 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 19:49:04 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 12 19:49:04 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:49:04 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:49:05 functional-384006 kubelet[4778]: E1212 19:49:05.004346    4778 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 19:49:05 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 19:49:05 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 19:49:05 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 12 19:49:05 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:49:05 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
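The kubelet never gets past configuration validation: v1.35.0-beta.0 refuses to start on a cgroup v1 host, and this Ubuntu 20.04 / 5.15 builder is exactly that, so systemd just keeps restarting it (counters 319-322 above). A sketch for confirming the host's cgroup mode; the failCgroupV1 field is the KubeletConfiguration knob that gated this check on earlier releases and is an assumption to verify against the v1.35 kubelet docs:

	# cgroup2fs => unified hierarchy (v2); tmpfs => legacy/hybrid (v1)
	stat -fc %T /sys/fs/cgroup/
	# assumption: on recent kubelets this KubeletConfiguration field controls the check
	grep -i failCgroupV1 /var/lib/kubelet/config.yaml || echo 'field not set'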
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-384006 -n functional-384006
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-384006 -n functional-384006: exit status 6 (326.120732ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1212 19:49:06.099155   48365 status.go:458] kubeconfig endpoint: get endpoint: "functional-384006" does not appear in /home/jenkins/minikube-integration/22112-2315/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-384006" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (501.85s)
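The status failure is a downstream symptom: after the failed start, the profile's entry is missing from the kubeconfig, which is what status.go:458 reports. Had the cluster come up, the fix suggested in the stdout above would be:

	out/minikube-linux-arm64 update-context -p functional-384006
	kubectl config current-context   # should then print functional-384006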

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (367.93s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1212 19:49:06.114314    4120 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-384006 --alsologtostderr -v=8
E1212 19:49:51.906698    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:50:19.609615    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:52:22.896622    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:53:45.972650    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:54:51.907458    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-384006 --alsologtostderr -v=8: exit status 80 (6m5.1033287s)

-- stdout --
	* [functional-384006] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-384006" primary control-plane node in "functional-384006" cluster
	* Pulling base image v0.0.48-1765505794-22112 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1212 19:49:06.161667   48438 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:49:06.161882   48438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:49:06.161913   48438 out.go:374] Setting ErrFile to fd 2...
	I1212 19:49:06.161935   48438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:49:06.162192   48438 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 19:49:06.162605   48438 out.go:368] Setting JSON to false
	I1212 19:49:06.163501   48438 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1896,"bootTime":1765567051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 19:49:06.163603   48438 start.go:143] virtualization:  
	I1212 19:49:06.167059   48438 out.go:179] * [functional-384006] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 19:49:06.170023   48438 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:49:06.170127   48438 notify.go:221] Checking for updates...
	I1212 19:49:06.175791   48438 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:49:06.178620   48438 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:49:06.181479   48438 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	I1212 19:49:06.184334   48438 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 19:49:06.187177   48438 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:49:06.190472   48438 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 19:49:06.190582   48438 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:49:06.226589   48438 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 19:49:06.226705   48438 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:49:06.287038   48438 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 19:49:06.278380602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 19:49:06.287144   48438 docker.go:319] overlay module found
	I1212 19:49:06.290214   48438 out.go:179] * Using the docker driver based on existing profile
	I1212 19:49:06.293103   48438 start.go:309] selected driver: docker
	I1212 19:49:06.293122   48438 start.go:927] validating driver "docker" against &{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:49:06.293257   48438 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:49:06.293353   48438 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:49:06.346602   48438 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 19:49:06.338111982 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 19:49:06.347001   48438 cni.go:84] Creating CNI manager for ""
	I1212 19:49:06.347058   48438 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 19:49:06.347109   48438 start.go:353] cluster config:
	{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:49:06.350199   48438 out.go:179] * Starting "functional-384006" primary control-plane node in "functional-384006" cluster
	I1212 19:49:06.353090   48438 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 19:49:06.356052   48438 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 19:49:06.358945   48438 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 19:49:06.359005   48438 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 19:49:06.359039   48438 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 19:49:06.359049   48438 cache.go:65] Caching tarball of preloaded images
	I1212 19:49:06.359132   48438 preload.go:238] Found /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 19:49:06.359143   48438 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 19:49:06.359246   48438 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/config.json ...
	I1212 19:49:06.377622   48438 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 19:49:06.377646   48438 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 19:49:06.377660   48438 cache.go:243] Successfully downloaded all kic artifacts
	I1212 19:49:06.377689   48438 start.go:360] acquireMachinesLock for functional-384006: {Name:mk3334c8fedf7efc32fb4628474f2cba3c1d9181 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:49:06.377751   48438 start.go:364] duration metric: took 39.285µs to acquireMachinesLock for "functional-384006"
	I1212 19:49:06.377774   48438 start.go:96] Skipping create...Using existing machine configuration
	I1212 19:49:06.377781   48438 fix.go:54] fixHost starting: 
	I1212 19:49:06.378037   48438 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
	I1212 19:49:06.394046   48438 fix.go:112] recreateIfNeeded on functional-384006: state=Running err=<nil>
	W1212 19:49:06.394073   48438 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 19:49:06.397347   48438 out.go:252] * Updating the running docker "functional-384006" container ...
	I1212 19:49:06.397378   48438 machine.go:94] provisionDockerMachine start ...
	I1212 19:49:06.397470   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:06.413547   48438 main.go:143] libmachine: Using SSH client type: native
	I1212 19:49:06.413876   48438 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:49:06.413891   48438 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 19:49:06.567084   48438 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-384006
	
	I1212 19:49:06.567107   48438 ubuntu.go:182] provisioning hostname "functional-384006"
	I1212 19:49:06.567205   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:06.584099   48438 main.go:143] libmachine: Using SSH client type: native
	I1212 19:49:06.584405   48438 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:49:06.584422   48438 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-384006 && echo "functional-384006" | sudo tee /etc/hostname
	I1212 19:49:06.744613   48438 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-384006
	
	I1212 19:49:06.744691   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:06.765941   48438 main.go:143] libmachine: Using SSH client type: native
	I1212 19:49:06.766253   48438 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:49:06.766274   48438 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-384006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-384006/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-384006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 19:49:06.919909   48438 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 19:49:06.919937   48438 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-2315/.minikube}
	I1212 19:49:06.919964   48438 ubuntu.go:190] setting up certificates
	I1212 19:49:06.919986   48438 provision.go:84] configureAuth start
	I1212 19:49:06.920046   48438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384006
	I1212 19:49:06.936937   48438 provision.go:143] copyHostCerts
	I1212 19:49:06.936980   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem
	I1212 19:49:06.937022   48438 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem, removing ...
	I1212 19:49:06.937035   48438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem
	I1212 19:49:06.937107   48438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem (1078 bytes)
	I1212 19:49:06.937204   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem
	I1212 19:49:06.937227   48438 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem, removing ...
	I1212 19:49:06.937232   48438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem
	I1212 19:49:06.937260   48438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem (1123 bytes)
	I1212 19:49:06.937320   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem
	I1212 19:49:06.937341   48438 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem, removing ...
	I1212 19:49:06.937354   48438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem
	I1212 19:49:06.937380   48438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem (1679 bytes)
	I1212 19:49:06.937435   48438 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem org=jenkins.functional-384006 san=[127.0.0.1 192.168.49.2 functional-384006 localhost minikube]
	I1212 19:49:07.142288   48438 provision.go:177] copyRemoteCerts
	I1212 19:49:07.142366   48438 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 19:49:07.142409   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:07.158934   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:07.267886   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 19:49:07.267945   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 19:49:07.284419   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 19:49:07.284477   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 19:49:07.301465   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 19:49:07.301546   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 19:49:07.318717   48438 provision.go:87] duration metric: took 398.706755ms to configureAuth
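configureAuth regenerated the machine server certificate with the SAN list logged above and copied it to /etc/docker on the node. A sketch for verifying the SANs landed, assuming openssl is available inside the container:

	# expect: 127.0.0.1, 192.168.49.2, functional-384006, localhost, minikube
	docker exec functional-384006 \
		openssl x509 -in /etc/docker/server.pem -noout -ext subjectAltName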
	I1212 19:49:07.318790   48438 ubuntu.go:206] setting minikube options for container-runtime
	I1212 19:49:07.319006   48438 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 19:49:07.319035   48438 machine.go:97] duration metric: took 921.650297ms to provisionDockerMachine
	I1212 19:49:07.319058   48438 start.go:293] postStartSetup for "functional-384006" (driver="docker")
	I1212 19:49:07.319080   48438 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 19:49:07.319173   48438 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 19:49:07.319238   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:07.336520   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:07.439884   48438 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 19:49:07.443234   48438 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1212 19:49:07.443254   48438 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1212 19:49:07.443259   48438 command_runner.go:130] > VERSION_ID="12"
	I1212 19:49:07.443263   48438 command_runner.go:130] > VERSION="12 (bookworm)"
	I1212 19:49:07.443268   48438 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1212 19:49:07.443272   48438 command_runner.go:130] > ID=debian
	I1212 19:49:07.443276   48438 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1212 19:49:07.443281   48438 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1212 19:49:07.443289   48438 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1212 19:49:07.443341   48438 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 19:49:07.443361   48438 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 19:49:07.443371   48438 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-2315/.minikube/addons for local assets ...
	I1212 19:49:07.443421   48438 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-2315/.minikube/files for local assets ...
	I1212 19:49:07.443503   48438 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1212 19:49:07.443510   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem -> /etc/ssl/certs/41202.pem
	I1212 19:49:07.443585   48438 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> hosts in /etc/test/nested/copy/4120
	I1212 19:49:07.443589   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> /etc/test/nested/copy/4120/hosts
	I1212 19:49:07.443629   48438 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4120
	I1212 19:49:07.450818   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1212 19:49:07.468474   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts --> /etc/test/nested/copy/4120/hosts (40 bytes)
	I1212 19:49:07.485034   48438 start.go:296] duration metric: took 165.952143ms for postStartSetup
	I1212 19:49:07.485111   48438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 19:49:07.485180   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:07.502057   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:07.604226   48438 command_runner.go:130] > 12%
	I1212 19:49:07.604746   48438 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 19:49:07.609551   48438 command_runner.go:130] > 172G
	I1212 19:49:07.609593   48438 fix.go:56] duration metric: took 1.231809331s for fixHost
	I1212 19:49:07.609604   48438 start.go:83] releasing machines lock for "functional-384006", held for 1.231841888s
	I1212 19:49:07.609687   48438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384006
	I1212 19:49:07.626230   48438 ssh_runner.go:195] Run: cat /version.json
	I1212 19:49:07.626285   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:07.626592   48438 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 19:49:07.626649   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:07.648515   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:07.651511   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:07.751468   48438 command_runner.go:130] > {"iso_version": "v1.37.0-1765481609-22101", "kicbase_version": "v0.0.48-1765505794-22112", "minikube_version": "v1.37.0", "commit": "2e51b54b5cee5d454381ac23cfe3d8d395879671"}
	I1212 19:49:07.751688   48438 ssh_runner.go:195] Run: systemctl --version
	I1212 19:49:07.840262   48438 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 19:49:07.843071   48438 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1212 19:49:07.843106   48438 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1212 19:49:07.843235   48438 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 19:49:07.847707   48438 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 19:49:07.847791   48438 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 19:49:07.847870   48438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 19:49:07.855348   48438 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 19:49:07.855380   48438 start.go:496] detecting cgroup driver to use...
	I1212 19:49:07.855411   48438 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 19:49:07.855473   48438 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 19:49:07.872745   48438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 19:49:07.888438   48438 docker.go:218] disabling cri-docker service (if available) ...
	I1212 19:49:07.888499   48438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 19:49:07.905328   48438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 19:49:07.922378   48438 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 19:49:08.040559   48438 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 19:49:08.153632   48438 docker.go:234] disabling docker service ...
	I1212 19:49:08.153749   48438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 19:49:08.170255   48438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 19:49:08.183563   48438 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 19:49:08.296935   48438 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 19:49:08.413119   48438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 19:49:08.425880   48438 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 19:49:08.438681   48438 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 19:49:08.439732   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 19:49:08.448541   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 19:49:08.457430   48438 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 19:49:08.457506   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 19:49:08.466099   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:49:08.474729   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 19:49:08.483278   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:49:08.491712   48438 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 19:49:08.499807   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 19:49:08.508171   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 19:49:08.517078   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 19:49:08.525348   48438 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 19:49:08.531636   48438 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 19:49:08.532621   48438 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 19:49:08.539615   48438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:49:08.670670   48438 ssh_runner.go:195] Run: sudo systemctl restart containerd
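Rather than templating /etc/containerd/config.toml, minikube patches it in place with the sed calls above, then reloads systemd and restarts containerd. A sketch for spot-checking the values those edits should have left behind (expected lines reconstructed from the substitutions in this run):

	sudo grep -nE 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir|enable_unprivileged_ports' \
		/etc/containerd/config.toml
	# expected, roughly:
	#   sandbox_image = "registry.k8s.io/pause:3.10.1"
	#   restrict_oom_score_adj = false
	#   SystemdCgroup = false            # cgroupfs driver, matching the host
	#   conf_dir = "/etc/cni/net.d"
	#   enable_unprivileged_ports = true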
	I1212 19:49:08.806796   48438 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 19:49:08.806894   48438 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 19:49:08.810696   48438 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1212 19:49:08.810773   48438 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 19:49:08.810802   48438 command_runner.go:130] > Device: 0,72	Inode: 1616        Links: 1
	I1212 19:49:08.810829   48438 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 19:49:08.810848   48438 command_runner.go:130] > Access: 2025-12-12 19:49:08.757711126 +0000
	I1212 19:49:08.810866   48438 command_runner.go:130] > Modify: 2025-12-12 19:49:08.757711126 +0000
	I1212 19:49:08.810881   48438 command_runner.go:130] > Change: 2025-12-12 19:49:08.757711126 +0000
	I1212 19:49:08.810904   48438 command_runner.go:130] >  Birth: -
	I1212 19:49:08.811086   48438 start.go:564] Will wait 60s for crictl version
	I1212 19:49:08.811174   48438 ssh_runner.go:195] Run: which crictl
	I1212 19:49:08.814485   48438 command_runner.go:130] > /usr/local/bin/crictl
	I1212 19:49:08.814611   48438 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 19:49:08.838884   48438 command_runner.go:130] > Version:  0.1.0
	I1212 19:49:08.838955   48438 command_runner.go:130] > RuntimeName:  containerd
	I1212 19:49:08.838976   48438 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1212 19:49:08.838997   48438 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 19:49:08.840776   48438 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 19:49:08.840864   48438 ssh_runner.go:195] Run: containerd --version
	I1212 19:49:08.863238   48438 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1212 19:49:08.864954   48438 ssh_runner.go:195] Run: containerd --version
	I1212 19:49:08.884422   48438 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
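crictl picks up its endpoint from the /etc/crictl.yaml written a moment earlier; the same check can be made explicit on the command line, which is handy when that file is suspect:

	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock info | head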
	I1212 19:49:08.891508   48438 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 19:49:08.894468   48438 cli_runner.go:164] Run: docker network inspect functional-384006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 19:49:08.910430   48438 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 19:49:08.914297   48438 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1212 19:49:08.914409   48438 kubeadm.go:884] updating cluster {Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 19:49:08.914505   48438 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 19:49:08.914560   48438 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 19:49:08.938916   48438 command_runner.go:130] > {
	I1212 19:49:08.938935   48438 command_runner.go:130] >   "images":  [
	I1212 19:49:08.938940   48438 command_runner.go:130] >     {
	I1212 19:49:08.938949   48438 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 19:49:08.938953   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.938959   48438 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 19:49:08.938962   48438 command_runner.go:130] >       ],
	I1212 19:49:08.938967   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.938980   48438 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1212 19:49:08.938983   48438 command_runner.go:130] >       ],
	I1212 19:49:08.938988   48438 command_runner.go:130] >       "size":  "40636774",
	I1212 19:49:08.938991   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.938995   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.938998   48438 command_runner.go:130] >     },
	I1212 19:49:08.939001   48438 command_runner.go:130] >     {
	I1212 19:49:08.939009   48438 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 19:49:08.939013   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939018   48438 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 19:49:08.939022   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939026   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939034   48438 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 19:49:08.939038   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939045   48438 command_runner.go:130] >       "size":  "8034419",
	I1212 19:49:08.939049   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939053   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939056   48438 command_runner.go:130] >     },
	I1212 19:49:08.939059   48438 command_runner.go:130] >     {
	I1212 19:49:08.939066   48438 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 19:49:08.939069   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939075   48438 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 19:49:08.939078   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939084   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939091   48438 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1212 19:49:08.939095   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939100   48438 command_runner.go:130] >       "size":  "21168808",
	I1212 19:49:08.939104   48438 command_runner.go:130] >       "username":  "nonroot",
	I1212 19:49:08.939108   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939111   48438 command_runner.go:130] >     },
	I1212 19:49:08.939115   48438 command_runner.go:130] >     {
	I1212 19:49:08.939121   48438 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 19:49:08.939125   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939130   48438 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 19:49:08.939133   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939137   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939154   48438 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1212 19:49:08.939157   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939161   48438 command_runner.go:130] >       "size":  "21136588",
	I1212 19:49:08.939166   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.939170   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.939173   48438 command_runner.go:130] >       },
	I1212 19:49:08.939177   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939181   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939184   48438 command_runner.go:130] >     },
	I1212 19:49:08.939187   48438 command_runner.go:130] >     {
	I1212 19:49:08.939193   48438 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 19:49:08.939200   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939206   48438 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 19:49:08.939209   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939213   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939220   48438 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1212 19:49:08.939224   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939228   48438 command_runner.go:130] >       "size":  "24678359",
	I1212 19:49:08.939231   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.939241   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.939244   48438 command_runner.go:130] >       },
	I1212 19:49:08.939248   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939252   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939254   48438 command_runner.go:130] >     },
	I1212 19:49:08.939257   48438 command_runner.go:130] >     {
	I1212 19:49:08.939264   48438 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 19:49:08.939268   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939273   48438 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 19:49:08.939276   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939280   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939288   48438 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1212 19:49:08.939291   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939295   48438 command_runner.go:130] >       "size":  "20661043",
	I1212 19:49:08.939299   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.939302   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.939305   48438 command_runner.go:130] >       },
	I1212 19:49:08.939309   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939313   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939316   48438 command_runner.go:130] >     },
	I1212 19:49:08.939319   48438 command_runner.go:130] >     {
	I1212 19:49:08.939326   48438 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 19:49:08.939330   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939334   48438 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 19:49:08.939338   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939345   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939353   48438 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 19:49:08.939356   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939360   48438 command_runner.go:130] >       "size":  "22429671",
	I1212 19:49:08.939364   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939368   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939370   48438 command_runner.go:130] >     },
	I1212 19:49:08.939375   48438 command_runner.go:130] >     {
	I1212 19:49:08.939381   48438 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 19:49:08.939385   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939390   48438 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 19:49:08.939393   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939397   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939405   48438 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1212 19:49:08.939408   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939412   48438 command_runner.go:130] >       "size":  "15391364",
	I1212 19:49:08.939416   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.939420   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.939423   48438 command_runner.go:130] >       },
	I1212 19:49:08.939427   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939430   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939433   48438 command_runner.go:130] >     },
	I1212 19:49:08.939437   48438 command_runner.go:130] >     {
	I1212 19:49:08.939443   48438 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 19:49:08.939447   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939452   48438 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 19:49:08.939454   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939458   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939465   48438 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1212 19:49:08.939469   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939473   48438 command_runner.go:130] >       "size":  "267939",
	I1212 19:49:08.939476   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.939480   48438 command_runner.go:130] >         "value":  "65535"
	I1212 19:49:08.939486   48438 command_runner.go:130] >       },
	I1212 19:49:08.939490   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939493   48438 command_runner.go:130] >       "pinned":  true
	I1212 19:49:08.939496   48438 command_runner.go:130] >     }
	I1212 19:49:08.939499   48438 command_runner.go:130] >   ]
	I1212 19:49:08.939502   48438 command_runner.go:130] > }
	I1212 19:49:08.940984   48438 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 19:49:08.941004   48438 containerd.go:534] Images already preloaded, skipping extraction
	I1212 19:49:08.941060   48438 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 19:49:08.962883   48438 command_runner.go:130] > {
	I1212 19:49:08.962905   48438 command_runner.go:130] >   "images":  [
	I1212 19:49:08.962910   48438 command_runner.go:130] >     {
	I1212 19:49:08.962919   48438 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 19:49:08.962924   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.962930   48438 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 19:49:08.962934   48438 command_runner.go:130] >       ],
	I1212 19:49:08.962938   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.962948   48438 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1212 19:49:08.962955   48438 command_runner.go:130] >       ],
	I1212 19:49:08.962964   48438 command_runner.go:130] >       "size":  "40636774",
	I1212 19:49:08.962971   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.962975   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.962985   48438 command_runner.go:130] >     },
	I1212 19:49:08.962993   48438 command_runner.go:130] >     {
	I1212 19:49:08.963005   48438 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 19:49:08.963012   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963017   48438 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 19:49:08.963021   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963026   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963035   48438 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 19:49:08.963040   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963045   48438 command_runner.go:130] >       "size":  "8034419",
	I1212 19:49:08.963049   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963055   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963058   48438 command_runner.go:130] >     },
	I1212 19:49:08.963064   48438 command_runner.go:130] >     {
	I1212 19:49:08.963071   48438 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 19:49:08.963081   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963086   48438 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 19:49:08.963090   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963104   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963113   48438 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1212 19:49:08.963116   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963123   48438 command_runner.go:130] >       "size":  "21168808",
	I1212 19:49:08.963127   48438 command_runner.go:130] >       "username":  "nonroot",
	I1212 19:49:08.963132   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963137   48438 command_runner.go:130] >     },
	I1212 19:49:08.963146   48438 command_runner.go:130] >     {
	I1212 19:49:08.963157   48438 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 19:49:08.963170   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963175   48438 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 19:49:08.963178   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963187   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963198   48438 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1212 19:49:08.963201   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963210   48438 command_runner.go:130] >       "size":  "21136588",
	I1212 19:49:08.963214   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.963221   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.963224   48438 command_runner.go:130] >       },
	I1212 19:49:08.963228   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963234   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963238   48438 command_runner.go:130] >     },
	I1212 19:49:08.963241   48438 command_runner.go:130] >     {
	I1212 19:49:08.963248   48438 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 19:49:08.963255   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963260   48438 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 19:49:08.963263   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963266   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963274   48438 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1212 19:49:08.963281   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963285   48438 command_runner.go:130] >       "size":  "24678359",
	I1212 19:49:08.963288   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.963298   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.963302   48438 command_runner.go:130] >       },
	I1212 19:49:08.963309   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963313   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963319   48438 command_runner.go:130] >     },
	I1212 19:49:08.963322   48438 command_runner.go:130] >     {
	I1212 19:49:08.963329   48438 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 19:49:08.963336   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963341   48438 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 19:49:08.963344   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963348   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963356   48438 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1212 19:49:08.963363   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963367   48438 command_runner.go:130] >       "size":  "20661043",
	I1212 19:49:08.963370   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.963374   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.963382   48438 command_runner.go:130] >       },
	I1212 19:49:08.963389   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963393   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963396   48438 command_runner.go:130] >     },
	I1212 19:49:08.963399   48438 command_runner.go:130] >     {
	I1212 19:49:08.963406   48438 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 19:49:08.963413   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963418   48438 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 19:49:08.963421   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963425   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963433   48438 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 19:49:08.963440   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963444   48438 command_runner.go:130] >       "size":  "22429671",
	I1212 19:49:08.963448   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963452   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963455   48438 command_runner.go:130] >     },
	I1212 19:49:08.963458   48438 command_runner.go:130] >     {
	I1212 19:49:08.963465   48438 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 19:49:08.963472   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963478   48438 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 19:49:08.963483   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963487   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963498   48438 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1212 19:49:08.963503   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963509   48438 command_runner.go:130] >       "size":  "15391364",
	I1212 19:49:08.963515   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.963518   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.963521   48438 command_runner.go:130] >       },
	I1212 19:49:08.963525   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963529   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963534   48438 command_runner.go:130] >     },
	I1212 19:49:08.963537   48438 command_runner.go:130] >     {
	I1212 19:49:08.963547   48438 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 19:49:08.963555   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963560   48438 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 19:49:08.963566   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963570   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963580   48438 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1212 19:49:08.963587   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963591   48438 command_runner.go:130] >       "size":  "267939",
	I1212 19:49:08.963594   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.963598   48438 command_runner.go:130] >         "value":  "65535"
	I1212 19:49:08.963604   48438 command_runner.go:130] >       },
	I1212 19:49:08.963611   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963615   48438 command_runner.go:130] >       "pinned":  true
	I1212 19:49:08.963618   48438 command_runner.go:130] >     }
	I1212 19:49:08.963621   48438 command_runner.go:130] >   ]
	I1212 19:49:08.963624   48438 command_runner.go:130] > }
	I1212 19:49:08.965735   48438 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 19:49:08.965756   48438 cache_images.go:86] Images are preloaded, skipping loading
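The two crictl listings above are how minikube decides extraction can be skipped: every image tag it expects for v1.35.0-beta.0 is already present in the containerd store. A minimal stdlib sketch of the same check, assuming only the JSON shape printed above (this is not minikube's actual containerd.go logic):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the shape of the "crictl images --output json" dump above.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Two of the expected tags, taken from the listing above.
	for _, want := range []string{
		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
		"registry.k8s.io/pause:3.10.1",
	} {
		fmt.Println(want, "preloaded:", have[want])
	}
}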
	I1212 19:49:08.965764   48438 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1212 19:49:08.965868   48438 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-384006 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
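Note the empty ExecStart= line in the generated drop-in above: in a systemd override, an empty assignment clears the vendor unit's command list so the line that follows fully replaces it instead of appending a second ExecStart. A hypothetical sketch of rendering that flag list deterministically (minikube's real template lives elsewhere in its source; values are the ones shown above):

package main

import (
	"fmt"
	"sort"
	"strings"
)

func main() {
	// Per-node values taken from the rendered unit above.
	flags := map[string]string{
		"bootstrap-kubeconfig": "/etc/kubernetes/bootstrap-kubelet.conf",
		"config":               "/var/lib/kubelet/config.yaml",
		"hostname-override":    "functional-384006",
		"kubeconfig":           "/etc/kubernetes/kubelet.conf",
		"node-ip":              "192.168.49.2",
	}
	keys := make([]string, 0, len(flags))
	for k := range flags {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic, alphabetical flag order, as in the unit above
	parts := []string{"/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet"}
	for _, k := range keys {
		parts = append(parts, fmt.Sprintf("--%s=%s", k, flags[k]))
	}
	fmt.Println("ExecStart=")
	fmt.Println("ExecStart=" + strings.Join(parts, " "))
}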
	I1212 19:49:08.965936   48438 ssh_runner.go:195] Run: sudo crictl info
	I1212 19:49:08.990907   48438 command_runner.go:130] > {
	I1212 19:49:08.990927   48438 command_runner.go:130] >   "cniconfig": {
	I1212 19:49:08.990932   48438 command_runner.go:130] >     "Networks": [
	I1212 19:49:08.990936   48438 command_runner.go:130] >       {
	I1212 19:49:08.990942   48438 command_runner.go:130] >         "Config": {
	I1212 19:49:08.990947   48438 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1212 19:49:08.990980   48438 command_runner.go:130] >           "Name": "cni-loopback",
	I1212 19:49:08.990997   48438 command_runner.go:130] >           "Plugins": [
	I1212 19:49:08.991002   48438 command_runner.go:130] >             {
	I1212 19:49:08.991010   48438 command_runner.go:130] >               "Network": {
	I1212 19:49:08.991014   48438 command_runner.go:130] >                 "ipam": {},
	I1212 19:49:08.991020   48438 command_runner.go:130] >                 "type": "loopback"
	I1212 19:49:08.991023   48438 command_runner.go:130] >               },
	I1212 19:49:08.991033   48438 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1212 19:49:08.991041   48438 command_runner.go:130] >             }
	I1212 19:49:08.991063   48438 command_runner.go:130] >           ],
	I1212 19:49:08.991073   48438 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1212 19:49:08.991080   48438 command_runner.go:130] >         },
	I1212 19:49:08.991089   48438 command_runner.go:130] >         "IFName": "lo"
	I1212 19:49:08.991095   48438 command_runner.go:130] >       }
	I1212 19:49:08.991098   48438 command_runner.go:130] >     ],
	I1212 19:49:08.991103   48438 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1212 19:49:08.991106   48438 command_runner.go:130] >     "PluginDirs": [
	I1212 19:49:08.991109   48438 command_runner.go:130] >       "/opt/cni/bin"
	I1212 19:49:08.991113   48438 command_runner.go:130] >     ],
	I1212 19:49:08.991117   48438 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1212 19:49:08.991135   48438 command_runner.go:130] >     "Prefix": "eth"
	I1212 19:49:08.991151   48438 command_runner.go:130] >   },
	I1212 19:49:08.991154   48438 command_runner.go:130] >   "config": {
	I1212 19:49:08.991158   48438 command_runner.go:130] >     "cdiSpecDirs": [
	I1212 19:49:08.991171   48438 command_runner.go:130] >       "/etc/cdi",
	I1212 19:49:08.991184   48438 command_runner.go:130] >       "/var/run/cdi"
	I1212 19:49:08.991188   48438 command_runner.go:130] >     ],
	I1212 19:49:08.991191   48438 command_runner.go:130] >     "cni": {
	I1212 19:49:08.991195   48438 command_runner.go:130] >       "binDir": "",
	I1212 19:49:08.991202   48438 command_runner.go:130] >       "binDirs": [
	I1212 19:49:08.991206   48438 command_runner.go:130] >         "/opt/cni/bin"
	I1212 19:49:08.991209   48438 command_runner.go:130] >       ],
	I1212 19:49:08.991216   48438 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1212 19:49:08.991220   48438 command_runner.go:130] >       "confTemplate": "",
	I1212 19:49:08.991224   48438 command_runner.go:130] >       "ipPref": "",
	I1212 19:49:08.991227   48438 command_runner.go:130] >       "maxConfNum": 1,
	I1212 19:49:08.991231   48438 command_runner.go:130] >       "setupSerially": false,
	I1212 19:49:08.991235   48438 command_runner.go:130] >       "useInternalLoopback": false
	I1212 19:49:08.991248   48438 command_runner.go:130] >     },
	I1212 19:49:08.991264   48438 command_runner.go:130] >     "containerd": {
	I1212 19:49:08.991273   48438 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1212 19:49:08.991288   48438 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1212 19:49:08.991302   48438 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1212 19:49:08.991311   48438 command_runner.go:130] >       "runtimes": {
	I1212 19:49:08.991317   48438 command_runner.go:130] >         "runc": {
	I1212 19:49:08.991321   48438 command_runner.go:130] >           "ContainerAnnotations": null,
	I1212 19:49:08.991325   48438 command_runner.go:130] >           "PodAnnotations": null,
	I1212 19:49:08.991329   48438 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1212 19:49:08.991340   48438 command_runner.go:130] >           "cgroupWritable": false,
	I1212 19:49:08.991344   48438 command_runner.go:130] >           "cniConfDir": "",
	I1212 19:49:08.991347   48438 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1212 19:49:08.991351   48438 command_runner.go:130] >           "io_type": "",
	I1212 19:49:08.991366   48438 command_runner.go:130] >           "options": {
	I1212 19:49:08.991378   48438 command_runner.go:130] >             "BinaryName": "",
	I1212 19:49:08.991382   48438 command_runner.go:130] >             "CriuImagePath": "",
	I1212 19:49:08.991386   48438 command_runner.go:130] >             "CriuWorkPath": "",
	I1212 19:49:08.991400   48438 command_runner.go:130] >             "IoGid": 0,
	I1212 19:49:08.991410   48438 command_runner.go:130] >             "IoUid": 0,
	I1212 19:49:08.991414   48438 command_runner.go:130] >             "NoNewKeyring": false,
	I1212 19:49:08.991418   48438 command_runner.go:130] >             "Root": "",
	I1212 19:49:08.991422   48438 command_runner.go:130] >             "ShimCgroup": "",
	I1212 19:49:08.991427   48438 command_runner.go:130] >             "SystemdCgroup": false
	I1212 19:49:08.991433   48438 command_runner.go:130] >           },
	I1212 19:49:08.991439   48438 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1212 19:49:08.991455   48438 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1212 19:49:08.991461   48438 command_runner.go:130] >           "runtimePath": "",
	I1212 19:49:08.991476   48438 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1212 19:49:08.991487   48438 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1212 19:49:08.991491   48438 command_runner.go:130] >           "snapshotter": ""
	I1212 19:49:08.991503   48438 command_runner.go:130] >         }
	I1212 19:49:08.991510   48438 command_runner.go:130] >       }
	I1212 19:49:08.991513   48438 command_runner.go:130] >     },
	I1212 19:49:08.991525   48438 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1212 19:49:08.991540   48438 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1212 19:49:08.991547   48438 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1212 19:49:08.991554   48438 command_runner.go:130] >     "disableApparmor": false,
	I1212 19:49:08.991559   48438 command_runner.go:130] >     "disableHugetlbController": true,
	I1212 19:49:08.991564   48438 command_runner.go:130] >     "disableProcMount": false,
	I1212 19:49:08.991583   48438 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1212 19:49:08.991588   48438 command_runner.go:130] >     "enableCDI": true,
	I1212 19:49:08.991603   48438 command_runner.go:130] >     "enableSelinux": false,
	I1212 19:49:08.991616   48438 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1212 19:49:08.991621   48438 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1212 19:49:08.991627   48438 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1212 19:49:08.991634   48438 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1212 19:49:08.991639   48438 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1212 19:49:08.991643   48438 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1212 19:49:08.991653   48438 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1212 19:49:08.991658   48438 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1212 19:49:08.991662   48438 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1212 19:49:08.991678   48438 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1212 19:49:08.991689   48438 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1212 19:49:08.991694   48438 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1212 19:49:08.991696   48438 command_runner.go:130] >   },
	I1212 19:49:08.991700   48438 command_runner.go:130] >   "features": {
	I1212 19:49:08.991704   48438 command_runner.go:130] >     "supplemental_groups_policy": true
	I1212 19:49:08.991706   48438 command_runner.go:130] >   },
	I1212 19:49:08.991710   48438 command_runner.go:130] >   "golang": "go1.24.9",
	I1212 19:49:08.991719   48438 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1212 19:49:08.991728   48438 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1212 19:49:08.991732   48438 command_runner.go:130] >   "runtimeHandlers": [
	I1212 19:49:08.991735   48438 command_runner.go:130] >     {
	I1212 19:49:08.991739   48438 command_runner.go:130] >       "features": {
	I1212 19:49:08.991743   48438 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1212 19:49:08.991747   48438 command_runner.go:130] >         "user_namespaces": true
	I1212 19:49:08.991751   48438 command_runner.go:130] >       }
	I1212 19:49:08.991759   48438 command_runner.go:130] >     },
	I1212 19:49:08.991762   48438 command_runner.go:130] >     {
	I1212 19:49:08.991766   48438 command_runner.go:130] >       "features": {
	I1212 19:49:08.991770   48438 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1212 19:49:08.991774   48438 command_runner.go:130] >         "user_namespaces": true
	I1212 19:49:08.991796   48438 command_runner.go:130] >       },
	I1212 19:49:08.991800   48438 command_runner.go:130] >       "name": "runc"
	I1212 19:49:08.991803   48438 command_runner.go:130] >     }
	I1212 19:49:08.991807   48438 command_runner.go:130] >   ],
	I1212 19:49:08.991875   48438 command_runner.go:130] >   "status": {
	I1212 19:49:08.991889   48438 command_runner.go:130] >     "conditions": [
	I1212 19:49:08.991892   48438 command_runner.go:130] >       {
	I1212 19:49:08.991895   48438 command_runner.go:130] >         "message": "",
	I1212 19:49:08.991899   48438 command_runner.go:130] >         "reason": "",
	I1212 19:49:08.991904   48438 command_runner.go:130] >         "status": true,
	I1212 19:49:08.991918   48438 command_runner.go:130] >         "type": "RuntimeReady"
	I1212 19:49:08.991921   48438 command_runner.go:130] >       },
	I1212 19:49:08.991925   48438 command_runner.go:130] >       {
	I1212 19:49:08.991939   48438 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1212 19:49:08.991955   48438 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1212 19:49:08.991963   48438 command_runner.go:130] >         "status": false,
	I1212 19:49:08.991967   48438 command_runner.go:130] >         "type": "NetworkReady"
	I1212 19:49:08.991970   48438 command_runner.go:130] >       },
	I1212 19:49:08.991989   48438 command_runner.go:130] >       {
	I1212 19:49:08.992014   48438 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1212 19:49:08.992028   48438 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1212 19:49:08.992037   48438 command_runner.go:130] >         "status": false,
	I1212 19:49:08.992042   48438 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1212 19:49:08.992045   48438 command_runner.go:130] >       }
	I1212 19:49:08.992058   48438 command_runner.go:130] >     ]
	I1212 19:49:08.992068   48438 command_runner.go:130] >   }
	I1212 19:49:08.992071   48438 command_runner.go:130] > }
	I1212 19:49:08.994409   48438 cni.go:84] Creating CNI manager for ""
	I1212 19:49:08.994432   48438 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 19:49:08.994453   48438 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
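The crictl info dump above is what drives the CNI decision: RuntimeReady is true but NetworkReady is false ("cni plugin not initialized"), so minikube knows it still has to install a CNI, and for the docker driver plus containerd it picks kindnet. A stdlib sketch of pulling those conditions out of the same JSON (the struct fields just mirror the dump above):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// criInfo mirrors the relevant part of the "crictl info" JSON above.
type criInfo struct {
	Status struct {
		Conditions []struct {
			Type    string `json:"type"`
			Status  bool   `json:"status"`
			Reason  string `json:"reason"`
			Message string `json:"message"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "info").Output()
	if err != nil {
		panic(err)
	}
	var info criInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	for _, c := range info.Status.Conditions {
		fmt.Printf("%s=%v reason=%q message=%q\n", c.Type, c.Status, c.Reason, c.Message)
	}
}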
	I1212 19:49:08.994474   48438 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-384006 NodeName:functional-384006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Sta
ticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 19:49:08.994579   48438 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-384006"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
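The generated kubeadm config above is four YAML documents in a single file, separated by "---": InitConfiguration and ClusterConfiguration for kubeadm itself, then KubeletConfiguration and KubeProxyConfiguration for the components. A stdlib-only sketch that splits such a file and reports each document's kind (a real parser would use a YAML library; the constant is a trimmed stand-in for the config above):

package main

import (
	"fmt"
	"strings"
)

const cfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`

func main() {
	// Split on the document separator and scan each piece for its kind.
	for _, doc := range strings.Split(cfg, "---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Println(strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}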
	
	I1212 19:49:08.994644   48438 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 19:49:09.001254   48438 command_runner.go:130] > kubeadm
	I1212 19:49:09.001273   48438 command_runner.go:130] > kubectl
	I1212 19:49:09.001277   48438 command_runner.go:130] > kubelet
	I1212 19:49:09.002097   48438 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 19:49:09.002172   48438 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 19:49:09.009620   48438 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 19:49:09.025282   48438 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 19:49:09.038423   48438 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1212 19:49:09.054506   48438 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 19:49:09.058001   48438 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1212 19:49:09.058066   48438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:49:09.175064   48438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:49:09.445347   48438 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006 for IP: 192.168.49.2
	I1212 19:49:09.445426   48438 certs.go:195] generating shared ca certs ...
	I1212 19:49:09.445484   48438 certs.go:227] acquiring lock for ca certs: {Name:mk39256c1929fe0803d745b94bd58afc348a7e3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:49:09.445704   48438 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key
	I1212 19:49:09.445799   48438 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key
	I1212 19:49:09.445839   48438 certs.go:257] generating profile certs ...
	I1212 19:49:09.446025   48438 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.key
	I1212 19:49:09.446164   48438 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key.6e756d1b
	I1212 19:49:09.446275   48438 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key
	I1212 19:49:09.446313   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 19:49:09.446386   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 19:49:09.446438   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 19:49:09.446492   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 19:49:09.446544   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 19:49:09.446605   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 19:49:09.446663   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 19:49:09.446721   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 19:49:09.446856   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem (1338 bytes)
	W1212 19:49:09.446943   48438 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1212 19:49:09.447016   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 19:49:09.447074   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem (1078 bytes)
	I1212 19:49:09.447157   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem (1123 bytes)
	I1212 19:49:09.447233   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem (1679 bytes)
	I1212 19:49:09.447516   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1212 19:49:09.447598   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem -> /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.447652   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem -> /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.447686   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.448483   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 19:49:09.470612   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 19:49:09.491665   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 19:49:09.514138   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 19:49:09.535795   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 19:49:09.552964   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 19:49:09.570164   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 19:49:09.587343   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 19:49:09.604384   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1212 19:49:09.621471   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1212 19:49:09.638910   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 19:49:09.656615   48438 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 19:49:09.669235   48438 ssh_runner.go:195] Run: openssl version
	I1212 19:49:09.674787   48438 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1212 19:49:09.675343   48438 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.682988   48438 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1212 19:49:09.690425   48438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.693996   48438 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 19:40 /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.694309   48438 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:40 /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.694370   48438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.734801   48438 command_runner.go:130] > 3ec20f2e
	I1212 19:49:09.735274   48438 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 19:49:09.742485   48438 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.749966   48438 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 19:49:09.757755   48438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.761677   48438 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 19:30 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.761712   48438 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:30 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.761771   48438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.803349   48438 command_runner.go:130] > b5213941
	I1212 19:49:09.803809   48438 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 19:49:09.811062   48438 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.818242   48438 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1212 19:49:09.825568   48438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.829043   48438 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 19:40 /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.829382   48438 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:40 /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.829462   48438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.872087   48438 command_runner.go:130] > 51391683
	I1212 19:49:09.872525   48438 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
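The hash-and-link loop above follows OpenSSL's c_rehash layout: each trusted CA under /etc/ssl/certs is also reachable by its subject-hash name "<hash>.0", where the hash is what "openssl x509 -hash -noout" prints (b5213941 for minikubeCA here, hence the test -L on /etc/ssl/certs/b5213941.0). A small sketch computing the link name the same way:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask openssl for the subject hash of the CA certificate.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	// The c_rehash-style symlink name that the loop above verifies.
	fmt.Printf("ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
}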
	I1212 19:49:09.879635   48438 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:49:09.883004   48438 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:49:09.883053   48438 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1212 19:49:09.883072   48438 command_runner.go:130] > Device: 259,1	Inode: 1317518     Links: 1
	I1212 19:49:09.883079   48438 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 19:49:09.883085   48438 command_runner.go:130] > Access: 2025-12-12 19:45:02.427863285 +0000
	I1212 19:49:09.883090   48438 command_runner.go:130] > Modify: 2025-12-12 19:40:58.462325249 +0000
	I1212 19:49:09.883095   48438 command_runner.go:130] > Change: 2025-12-12 19:40:58.462325249 +0000
	I1212 19:49:09.883100   48438 command_runner.go:130] >  Birth: 2025-12-12 19:40:58.462325249 +0000
	I1212 19:49:09.883177   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 19:49:09.925331   48438 command_runner.go:130] > Certificate will not expire
	I1212 19:49:09.925758   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 19:49:09.966336   48438 command_runner.go:130] > Certificate will not expire
	I1212 19:49:09.966825   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 19:49:10.007601   48438 command_runner.go:130] > Certificate will not expire
	I1212 19:49:10.008047   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 19:49:10.052009   48438 command_runner.go:130] > Certificate will not expire
	I1212 19:49:10.052500   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 19:49:10.094223   48438 command_runner.go:130] > Certificate will not expire
	I1212 19:49:10.094385   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 19:49:10.136742   48438 command_runner.go:130] > Certificate will not expire
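Each "openssl x509 -checkend 86400" probe above asks whether the certificate expires within the next 86400 seconds (24 hours); it exits 0 and prints "Certificate will not expire" when it does not, which is why every check here passes. The same check written in Go against one of the probed files:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of -checkend 86400: does NotAfter fall within the next 24h?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}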
	I1212 19:49:10.136814   48438 kubeadm.go:401] StartCluster: {Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:49:10.136904   48438 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 19:49:10.136973   48438 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:49:10.167070   48438 cri.go:89] found id: ""
	I1212 19:49:10.167141   48438 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 19:49:10.174626   48438 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 19:49:10.174649   48438 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 19:49:10.174663   48438 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 19:49:10.175405   48438 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 19:49:10.175423   48438 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 19:49:10.175476   48438 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 19:49:10.183010   48438 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:49:10.183461   48438 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-384006" does not appear in /home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:49:10.183602   48438 kubeconfig.go:62] /home/jenkins/minikube-integration/22112-2315/kubeconfig needs updating (will repair): [kubeconfig missing "functional-384006" cluster setting kubeconfig missing "functional-384006" context setting]
	I1212 19:49:10.183992   48438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/kubeconfig: {Name:mke1d79e374217e0c5bc78bc2d9631db0e1e9bda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:49:10.184411   48438 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:49:10.184572   48438 kapi.go:59] client config for functional-384006: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.key", CAFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 19:49:10.185056   48438 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 19:49:10.185097   48438 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 19:49:10.185107   48438 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 19:49:10.185113   48438 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 19:49:10.185120   48438 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 19:49:10.185448   48438 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 19:49:10.185546   48438 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 19:49:10.194572   48438 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1212 19:49:10.194610   48438 kubeadm.go:602] duration metric: took 19.175488ms to restartPrimaryControlPlane
	I1212 19:49:10.194619   48438 kubeadm.go:403] duration metric: took 57.811789ms to StartCluster
	I1212 19:49:10.194633   48438 settings.go:142] acquiring lock: {Name:mk405cd0853bb1c41336dcaeeb8fe9a56ff7ca00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:49:10.194694   48438 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:49:10.195302   48438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/kubeconfig: {Name:mke1d79e374217e0c5bc78bc2d9631db0e1e9bda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:49:10.195505   48438 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 19:49:10.195860   48438 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 19:49:10.195913   48438 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 19:49:10.195982   48438 addons.go:70] Setting storage-provisioner=true in profile "functional-384006"
	I1212 19:49:10.195999   48438 addons.go:239] Setting addon storage-provisioner=true in "functional-384006"
	I1212 19:49:10.196020   48438 host.go:66] Checking if "functional-384006" exists ...
	I1212 19:49:10.196498   48438 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
	I1212 19:49:10.197078   48438 addons.go:70] Setting default-storageclass=true in profile "functional-384006"
	I1212 19:49:10.197104   48438 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-384006"
	I1212 19:49:10.197385   48438 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
	I1212 19:49:10.200737   48438 out.go:179] * Verifying Kubernetes components...
	I1212 19:49:10.203657   48438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:49:10.242694   48438 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:49:10.242850   48438 kapi.go:59] client config for functional-384006: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.key", CAFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 19:49:10.243167   48438 addons.go:239] Setting addon default-storageclass=true in "functional-384006"
	I1212 19:49:10.243197   48438 host.go:66] Checking if "functional-384006" exists ...
	I1212 19:49:10.243613   48438 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
	I1212 19:49:10.244264   48438 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 19:49:10.248400   48438 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:10.248422   48438 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 19:49:10.248484   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:10.280006   48438 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:10.280027   48438 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 19:49:10.280091   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:10.292135   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:10.320079   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:10.410663   48438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:49:10.453525   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:10.485844   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:11.196335   48438 node_ready.go:35] waiting up to 6m0s for node "functional-384006" to be "Ready" ...
	I1212 19:49:11.196458   48438 type.go:168] "Request Body" body=""
	I1212 19:49:11.196510   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:11.196726   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:11.196748   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.196769   48438 retry.go:31] will retry after 366.342967ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.196806   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:11.196817   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.196823   48438 retry.go:31] will retry after 300.335318ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.196876   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:11.497399   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:11.554914   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:11.558623   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.558688   48438 retry.go:31] will retry after 444.117502ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.563799   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:11.619827   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:11.623191   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.623218   48438 retry.go:31] will retry after 549.294372ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.698171   48438 type.go:168] "Request Body" body=""
	I1212 19:49:11.698248   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:11.698564   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:12.003014   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:12.062616   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:12.066362   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.066391   48438 retry.go:31] will retry after 595.188251ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.173715   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:12.197048   48438 type.go:168] "Request Body" body=""
	I1212 19:49:12.197131   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:12.197395   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:12.233993   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:12.234039   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.234058   48438 retry.go:31] will retry after 392.030002ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.626804   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:12.662348   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:12.696816   48438 type.go:168] "Request Body" body=""
	I1212 19:49:12.696944   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:12.697262   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:12.708549   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:12.715333   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.715413   48438 retry.go:31] will retry after 1.207907286s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.756481   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:12.756580   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.756630   48438 retry.go:31] will retry after 988.700176ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:13.197091   48438 type.go:168] "Request Body" body=""
	I1212 19:49:13.197179   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:13.197507   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:13.197567   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:13.697358   48438 type.go:168] "Request Body" body=""
	I1212 19:49:13.697464   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:13.697803   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:13.746091   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:13.800035   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:13.803463   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:13.803491   48438 retry.go:31] will retry after 829.308427ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:13.923746   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:13.982211   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:13.982249   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:13.982267   48438 retry.go:31] will retry after 769.179652ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:14.196516   48438 type.go:168] "Request Body" body=""
	I1212 19:49:14.196587   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:14.196865   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:14.633627   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:14.690489   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:14.693763   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:14.693798   48438 retry.go:31] will retry after 2.844765229s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:14.697018   48438 type.go:168] "Request Body" body=""
	I1212 19:49:14.697087   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:14.697405   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:14.752598   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:14.810008   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:14.810058   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:14.810075   48438 retry.go:31] will retry after 1.702576008s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:15.196507   48438 type.go:168] "Request Body" body=""
	I1212 19:49:15.196581   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:15.196896   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:15.696568   48438 type.go:168] "Request Body" body=""
	I1212 19:49:15.696635   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:15.696970   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:15.697028   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:16.196951   48438 type.go:168] "Request Body" body=""
	I1212 19:49:16.197024   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:16.197313   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:16.513895   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:16.577782   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:16.577823   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:16.577842   48438 retry.go:31] will retry after 3.833463827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:16.697243   48438 type.go:168] "Request Body" body=""
	I1212 19:49:16.697311   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:16.697616   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:17.197033   48438 type.go:168] "Request Body" body=""
	I1212 19:49:17.197116   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:17.197383   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:17.538823   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:17.596746   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:17.600222   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:17.600249   48438 retry.go:31] will retry after 2.11378985s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:17.696505   48438 type.go:168] "Request Body" body=""
	I1212 19:49:17.696573   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:17.696885   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:18.196556   48438 type.go:168] "Request Body" body=""
	I1212 19:49:18.196667   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:18.196977   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:18.197023   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:18.696638   48438 type.go:168] "Request Body" body=""
	I1212 19:49:18.696729   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:18.696984   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:19.196736   48438 type.go:168] "Request Body" body=""
	I1212 19:49:19.196812   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:19.197214   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:19.696622   48438 type.go:168] "Request Body" body=""
	I1212 19:49:19.696700   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:19.696961   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:19.714208   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:19.768038   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:19.771528   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:19.771557   48438 retry.go:31] will retry after 5.800996246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:20.197387   48438 type.go:168] "Request Body" body=""
	I1212 19:49:20.197458   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:20.197743   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:20.197788   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:20.412247   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:20.466933   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:20.470625   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:20.470653   48438 retry.go:31] will retry after 5.197371043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:20.697029   48438 type.go:168] "Request Body" body=""
	I1212 19:49:20.697099   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:20.697410   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:21.197198   48438 type.go:168] "Request Body" body=""
	I1212 19:49:21.197271   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:21.197569   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:21.697046   48438 type.go:168] "Request Body" body=""
	I1212 19:49:21.697116   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:21.697371   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:22.197188   48438 type.go:168] "Request Body" body=""
	I1212 19:49:22.197269   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:22.197585   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:22.697243   48438 type.go:168] "Request Body" body=""
	I1212 19:49:22.697314   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:22.697647   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:22.697696   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:23.197042   48438 type.go:168] "Request Body" body=""
	I1212 19:49:23.197134   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:23.197408   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:23.697049   48438 type.go:168] "Request Body" body=""
	I1212 19:49:23.697121   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:23.697429   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:24.197196   48438 type.go:168] "Request Body" body=""
	I1212 19:49:24.197268   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:24.197600   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:24.697001   48438 type.go:168] "Request Body" body=""
	I1212 19:49:24.697067   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:24.697318   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:25.196599   48438 type.go:168] "Request Body" body=""
	I1212 19:49:25.196674   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:25.197011   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:25.197067   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:25.573546   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:25.640105   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:25.640150   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:25.640168   48438 retry.go:31] will retry after 9.327300318s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:25.668309   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:25.696826   48438 type.go:168] "Request Body" body=""
	I1212 19:49:25.696923   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:25.697181   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:25.735314   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:25.738857   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:25.738887   48438 retry.go:31] will retry after 6.705148998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:26.197164   48438 type.go:168] "Request Body" body=""
	I1212 19:49:26.197240   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:26.197490   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:26.697309   48438 type.go:168] "Request Body" body=""
	I1212 19:49:26.697408   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:26.697729   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:27.197507   48438 type.go:168] "Request Body" body=""
	I1212 19:49:27.197584   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:27.197871   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:27.197919   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:27.696575   48438 type.go:168] "Request Body" body=""
	I1212 19:49:27.696652   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:27.696952   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:28.196680   48438 type.go:168] "Request Body" body=""
	I1212 19:49:28.196762   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:28.197103   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:28.696600   48438 type.go:168] "Request Body" body=""
	I1212 19:49:28.696675   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:28.696996   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:29.196525   48438 type.go:168] "Request Body" body=""
	I1212 19:49:29.196638   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:29.196926   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:29.696599   48438 type.go:168] "Request Body" body=""
	I1212 19:49:29.696677   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:29.697003   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:29.697067   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:30.197085   48438 type.go:168] "Request Body" body=""
	I1212 19:49:30.197181   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:30.197519   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:30.697033   48438 type.go:168] "Request Body" body=""
	I1212 19:49:30.697106   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:30.697351   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:31.197223   48438 type.go:168] "Request Body" body=""
	I1212 19:49:31.197295   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:31.197605   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:31.697429   48438 type.go:168] "Request Body" body=""
	I1212 19:49:31.697504   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:31.697832   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:31.697883   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:32.196518   48438 type.go:168] "Request Body" body=""
	I1212 19:49:32.196586   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:32.196831   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:32.444273   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:32.498733   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:32.502453   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:32.502484   48438 retry.go:31] will retry after 9.024395099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:32.696884   48438 type.go:168] "Request Body" body=""
	I1212 19:49:32.696967   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:32.697298   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... GET https://192.168.49.2:8441/api/v1/nodes/functional-384006 repeated every ~500ms from 19:49:33.196 through 19:49:34.696, all with empty responses; node_ready.go:55 warned at 19:49:34.196: error getting node "functional-384006" condition "Ready" status (will retry): dial tcp 192.168.49.2:8441: connect: connection refused ...]
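The block kept above is the readiness poll that recurs for the rest of this log: minikube GETs the node object roughly every 500ms and checks its Ready condition, and every request here fails at the TCP dial. A minimal sketch of a loop with that shape follows; the structure is an assumption (this is not minikube's actual node_ready.go), and the real client's TLS and authentication are elided:

	// Hedged sketch of a ~500ms node-readiness poll with the same cadence
	// and failure mode as the log; not minikube's actual implementation.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func waitReady(url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := http.Get(url) // the real client sends authenticated protobuf GETs
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // real code would inspect the node's Ready condition here
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
		return fmt.Errorf("node not ready within %s", timeout)
	}

	func main() {
		fmt.Println(waitReady("https://192.168.49.2:8441/api/v1/nodes/functional-384006", 5*time.Second))
	}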
	I1212 19:49:34.968441   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:35.030670   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:35.034703   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:35.034735   48438 retry.go:31] will retry after 11.456350697s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
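Note the retry intervals retry.go announces across this section: 9.0s, 11.5s, 11.7s, 7.6s, 25.4s, 18.1s, 33.3s, 31.5s. That pattern is consistent with a growing base delay plus random jitter, but the exact policy is not visible in the log, so the sketch below is only a plausible shape, not minikube's retry code:

	// Hedged sketch of jittered, roughly-exponential backoff; the real
	// minikube retry policy may differ.
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	func backoff(attempt int) time.Duration {
		base := 5 * time.Second << attempt                // 5s, 10s, 20s, 40s, ...
		jitter := time.Duration(rand.Int63n(int64(base))) // random spread up to +base
		return base/2 + jitter
	}

	func main() {
		for i := 0; i < 4; i++ {
			fmt.Println("will retry after", backoff(i))
		}
	}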
	[... readiness polling continued every ~500ms from 19:49:35.197 through 19:49:41.197; node_ready.go:55 connection-refused warnings at 19:49:36.197, 19:49:38.697 and 19:49:41.197 ...]
	I1212 19:49:41.527120   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:41.586633   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:41.590403   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:41.590436   48438 retry.go:31] will retry after 11.748431511s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... readiness polling continued every ~500ms from 19:49:41.696 through 19:49:46.197; node_ready.go:55 connection-refused warnings at 19:49:43.697 and 19:49:46.197 ...]
	I1212 19:49:46.491755   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:46.549211   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:46.549254   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:46.549272   48438 retry.go:31] will retry after 7.577859466s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... readiness polling continued every ~500ms from 19:49:46.697 through 19:49:53.196; node_ready.go:55 connection-refused warnings at 19:49:48.697, 19:49:50.697 and 19:49:53.196 ...]
	I1212 19:49:53.339331   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:53.394698   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:53.398291   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:53.398322   48438 retry.go:31] will retry after 25.381584091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... one further poll at 19:49:53.696, refused ...]
	I1212 19:49:54.127648   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:54.185700   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:54.185751   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:54.185771   48438 retry.go:31] will retry after 18.076319981s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... readiness polling continued every ~500ms from 19:49:54.196 through 19:50:12.197; node_ready.go:55 connection-refused warnings roughly every 2.5s (19:49:55.197, 19:49:57.197, 19:49:59.697, 19:50:01.697, 19:50:04.197, 19:50:06.197, 19:50:08.696, 19:50:10.697) ...]
	I1212 19:50:12.263038   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:50:12.317640   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:12.321089   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:50:12.321118   48438 retry.go:31] will retry after 33.331276854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... readiness polling continued every ~500ms from 19:50:12.696 through 19:50:18.696; node_ready.go:55 connection-refused warnings at 19:50:13.197, 19:50:15.197 and 19:50:17.697 ...]
	I1212 19:50:18.780412   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:50:18.840261   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:18.840307   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:50:18.840327   48438 retry.go:31] will retry after 31.549397312s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
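	The apply fails at client-side validation: kubectl downloads the OpenAPI schema from the apiserver before validating the manifest, and that download gets connection refused because the apiserver is down. As the message notes, --validate=false would skip the schema fetch, but the apply itself would still be refused, so minikube instead schedules a jittered retry (31.549397312s here). A hypothetical sketch of such a retry loop follows, assuming a plain kubectl on PATH; applyWithRetry and its parameters are illustrative, not the API of minikube's retry.go.

	// Hypothetical sketch of the addon retry seen above: re-run
	// `kubectl apply --force` with a delay while the apiserver is unreachable.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func applyWithRetry(manifest string, attempts int, delay time.Duration) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
			out, err := cmd.CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply %s: %w: %s", manifest, err, out)
			// The log shows a jittered ~31.5s wait before the next attempt.
			time.Sleep(delay)
		}
		return lastErr
	}

	func main() {
		if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 3, 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}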
	I1212 19:50:19.196623   48438 type.go:168] "Request Body" body=""
	I1212 19:50:19.196694   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:19.196999   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:19.696626   48438 type.go:168] "Request Body" body=""
	I1212 19:50:19.696703   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:19.697021   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:20.197071   48438 type.go:168] "Request Body" body=""
	I1212 19:50:20.197171   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:20.197499   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:20.197554   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:20.697293   48438 type.go:168] "Request Body" body=""
	I1212 19:50:20.697395   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:20.697711   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:21.197239   48438 type.go:168] "Request Body" body=""
	I1212 19:50:21.197313   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:21.197699   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:21.697033   48438 type.go:168] "Request Body" body=""
	I1212 19:50:21.697105   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:21.697463   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:22.197573   48438 type.go:168] "Request Body" body=""
	I1212 19:50:22.197648   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:22.197961   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:22.198017   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:22.696673   48438 type.go:168] "Request Body" body=""
	I1212 19:50:22.696757   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:22.697109   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:23.196692   48438 type.go:168] "Request Body" body=""
	I1212 19:50:23.196763   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:23.197088   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:23.696607   48438 type.go:168] "Request Body" body=""
	I1212 19:50:23.696679   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:23.697041   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:24.196735   48438 type.go:168] "Request Body" body=""
	I1212 19:50:24.196826   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:24.197141   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:24.696553   48438 type.go:168] "Request Body" body=""
	I1212 19:50:24.696621   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:24.696913   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:24.696962   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:25.196593   48438 type.go:168] "Request Body" body=""
	I1212 19:50:25.196673   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:25.197028   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:25.696594   48438 type.go:168] "Request Body" body=""
	I1212 19:50:25.696673   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:25.696999   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:26.196805   48438 type.go:168] "Request Body" body=""
	I1212 19:50:26.196888   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:26.197147   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:26.696608   48438 type.go:168] "Request Body" body=""
	I1212 19:50:26.696679   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:26.697019   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:26.697078   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:27.196623   48438 type.go:168] "Request Body" body=""
	I1212 19:50:27.196705   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:27.197036   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:27.696717   48438 type.go:168] "Request Body" body=""
	I1212 19:50:27.696786   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:27.697091   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:28.196811   48438 type.go:168] "Request Body" body=""
	I1212 19:50:28.196880   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:28.197204   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:28.696604   48438 type.go:168] "Request Body" body=""
	I1212 19:50:28.696681   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:28.697032   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:28.697101   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:29.196569   48438 type.go:168] "Request Body" body=""
	I1212 19:50:29.196634   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:29.196899   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:29.696596   48438 type.go:168] "Request Body" body=""
	I1212 19:50:29.696673   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:29.697016   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:30.196809   48438 type.go:168] "Request Body" body=""
	I1212 19:50:30.196906   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:30.197224   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:30.696564   48438 type.go:168] "Request Body" body=""
	I1212 19:50:30.696665   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:30.696997   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:31.196990   48438 type.go:168] "Request Body" body=""
	I1212 19:50:31.197061   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:31.197407   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:31.197465   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:31.697274   48438 type.go:168] "Request Body" body=""
	I1212 19:50:31.697350   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:31.697677   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:32.197039   48438 type.go:168] "Request Body" body=""
	I1212 19:50:32.197133   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:32.197397   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:32.697173   48438 type.go:168] "Request Body" body=""
	I1212 19:50:32.697264   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:32.697607   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:33.197434   48438 type.go:168] "Request Body" body=""
	I1212 19:50:33.197509   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:33.197848   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:33.197901   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:33.696526   48438 type.go:168] "Request Body" body=""
	I1212 19:50:33.696597   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:33.696851   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:34.196561   48438 type.go:168] "Request Body" body=""
	I1212 19:50:34.196634   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:34.196929   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:34.696533   48438 type.go:168] "Request Body" body=""
	I1212 19:50:34.696627   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:34.696942   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:35.196543   48438 type.go:168] "Request Body" body=""
	I1212 19:50:35.196615   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:35.196925   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:35.696579   48438 type.go:168] "Request Body" body=""
	I1212 19:50:35.696679   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:35.696996   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:35.697050   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:36.197040   48438 type.go:168] "Request Body" body=""
	I1212 19:50:36.197129   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:36.197456   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:36.697255   48438 type.go:168] "Request Body" body=""
	I1212 19:50:36.697338   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:36.697651   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:37.197319   48438 type.go:168] "Request Body" body=""
	I1212 19:50:37.197399   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:37.197705   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:37.697534   48438 type.go:168] "Request Body" body=""
	I1212 19:50:37.697606   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:37.697891   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:37.697935   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:38.196632   48438 type.go:168] "Request Body" body=""
	I1212 19:50:38.196697   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:38.197041   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:38.696592   48438 type.go:168] "Request Body" body=""
	I1212 19:50:38.696683   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:38.696994   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:39.196632   48438 type.go:168] "Request Body" body=""
	I1212 19:50:39.196728   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:39.197038   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:39.696547   48438 type.go:168] "Request Body" body=""
	I1212 19:50:39.696633   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:39.696879   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:40.197486   48438 type.go:168] "Request Body" body=""
	I1212 19:50:40.197559   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:40.197900   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:40.197971   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:40.696503   48438 type.go:168] "Request Body" body=""
	I1212 19:50:40.696594   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:40.696917   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:41.196679   48438 type.go:168] "Request Body" body=""
	I1212 19:50:41.196745   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:41.196986   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:41.696662   48438 type.go:168] "Request Body" body=""
	I1212 19:50:41.696734   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:41.697088   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:42.196929   48438 type.go:168] "Request Body" body=""
	I1212 19:50:42.197017   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:42.197388   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:42.697020   48438 type.go:168] "Request Body" body=""
	I1212 19:50:42.697095   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:42.697350   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:42.697390   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:43.197172   48438 type.go:168] "Request Body" body=""
	I1212 19:50:43.197249   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:43.197578   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:43.697431   48438 type.go:168] "Request Body" body=""
	I1212 19:50:43.697521   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:43.697836   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:44.196518   48438 type.go:168] "Request Body" body=""
	I1212 19:50:44.196586   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:44.196857   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:44.696570   48438 type.go:168] "Request Body" body=""
	I1212 19:50:44.696646   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:44.697013   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:45.196875   48438 type.go:168] "Request Body" body=""
	I1212 19:50:45.196959   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:45.197384   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:45.197450   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:45.653170   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:50:45.696473   48438 type.go:168] "Request Body" body=""
	I1212 19:50:45.696544   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:45.696768   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:45.722043   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:45.722078   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:45.722170   48438 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
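	Note that both endpoints refuse connections: the node poll against 192.168.49.2:8441 and kubectl's in-VM request to [::1]:8441. That points at the apiserver not listening at all, rather than at anything wrong with the storageclass manifest. A quick triage sketch (illustrative only; the addresses are taken from the log) dials the port directly to distinguish the two cases.

	// Triage sketch: a plain TCP dial separates "apiserver down" from
	// "manifest rejected". Both addresses below come from the log.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		for _, addr := range []string{"192.168.49.2:8441", "127.0.0.1:8441"} {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err != nil {
				fmt.Printf("%s: unreachable (%v)\n", addr, err)
				continue
			}
			conn.Close()
			fmt.Printf("%s: listening\n", addr)
		}
	}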
	I1212 19:50:46.197149   48438 type.go:168] "Request Body" body=""
	I1212 19:50:46.197221   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:46.197524   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:46.697218   48438 type.go:168] "Request Body" body=""
	I1212 19:50:46.697285   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:46.697603   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:47.197024   48438 type.go:168] "Request Body" body=""
	I1212 19:50:47.197110   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:47.197403   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:47.697075   48438 type.go:168] "Request Body" body=""
	I1212 19:50:47.697158   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:47.697475   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:47.697529   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:48.197120   48438 type.go:168] "Request Body" body=""
	I1212 19:50:48.197195   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:48.197571   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:48.697105   48438 type.go:168] "Request Body" body=""
	I1212 19:50:48.697174   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:48.697455   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:49.197123   48438 type.go:168] "Request Body" body=""
	I1212 19:50:49.197191   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:49.197523   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:49.697198   48438 type.go:168] "Request Body" body=""
	I1212 19:50:49.697276   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:49.697615   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:49.697669   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:50.197372   48438 type.go:168] "Request Body" body=""
	I1212 19:50:50.197443   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:50.197708   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:50.390183   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:50:50.447451   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:50.447486   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:50.447560   48438 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 19:50:50.450729   48438 out.go:179] * Enabled addons: 
	I1212 19:50:50.452858   48438 addons.go:530] duration metric: took 1m40.25694205s for enable addons: enabled=[]
	I1212 19:50:50.697432   48438 type.go:168] "Request Body" body=""
	I1212 19:50:50.697527   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:50.697885   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:51.196739   48438 type.go:168] "Request Body" body=""
	I1212 19:50:51.196816   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:51.197159   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:51.696528   48438 type.go:168] "Request Body" body=""
	I1212 19:50:51.696603   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:51.696897   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:52.196579   48438 type.go:168] "Request Body" body=""
	I1212 19:50:52.196648   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:52.196951   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:52.197004   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:52.696606   48438 type.go:168] "Request Body" body=""
	I1212 19:50:52.696677   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:52.697003   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:53.196675   48438 type.go:168] "Request Body" body=""
	I1212 19:50:53.196744   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:53.196992   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:53.696666   48438 type.go:168] "Request Body" body=""
	I1212 19:50:53.696741   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:53.697070   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:54.196757   48438 type.go:168] "Request Body" body=""
	I1212 19:50:54.196826   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:54.197113   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:54.197157   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:54.696549   48438 type.go:168] "Request Body" body=""
	I1212 19:50:54.696641   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:54.696957   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:55.196628   48438 type.go:168] "Request Body" body=""
	I1212 19:50:55.196708   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:55.197136   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:55.696829   48438 type.go:168] "Request Body" body=""
	I1212 19:50:55.696900   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:55.697229   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:56.197066   48438 type.go:168] "Request Body" body=""
	I1212 19:50:56.197131   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:56.197387   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:56.197429   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:56.697219   48438 type.go:168] "Request Body" body=""
	I1212 19:50:56.697315   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:56.697648   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:57.197432   48438 type.go:168] "Request Body" body=""
	I1212 19:50:57.197513   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:57.197815   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:57.696494   48438 type.go:168] "Request Body" body=""
	I1212 19:50:57.696561   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:57.696813   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:58.196619   48438 type.go:168] "Request Body" body=""
	I1212 19:50:58.196701   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:58.197024   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:58.696727   48438 type.go:168] "Request Body" body=""
	I1212 19:50:58.696805   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:58.697094   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:58.697138   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:59.196546   48438 type.go:168] "Request Body" body=""
	I1212 19:50:59.196633   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:59.196941   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:59.696655   48438 type.go:168] "Request Body" body=""
	I1212 19:50:59.696728   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:59.697035   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:00.197073   48438 type.go:168] "Request Body" body=""
	I1212 19:51:00.197153   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:00.197534   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:00.697068   48438 type.go:168] "Request Body" body=""
	I1212 19:51:00.697139   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:00.697403   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:00.697447   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:01.197252   48438 type.go:168] "Request Body" body=""
	I1212 19:51:01.197345   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:01.197675   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:01.697471   48438 type.go:168] "Request Body" body=""
	I1212 19:51:01.697549   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:01.697859   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:02.196610   48438 type.go:168] "Request Body" body=""
	I1212 19:51:02.196684   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:02.196940   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:02.696593   48438 type.go:168] "Request Body" body=""
	I1212 19:51:02.696665   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:02.696980   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:03.196694   48438 type.go:168] "Request Body" body=""
	I1212 19:51:03.196766   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:03.197077   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:03.197130   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:03.696767   48438 type.go:168] "Request Body" body=""
	I1212 19:51:03.696834   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:03.697143   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:04.196630   48438 type.go:168] "Request Body" body=""
	I1212 19:51:04.196704   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:04.197007   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:04.696694   48438 type.go:168] "Request Body" body=""
	I1212 19:51:04.696764   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:04.697055   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:05.196712   48438 type.go:168] "Request Body" body=""
	I1212 19:51:05.196795   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:05.197072   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:05.696568   48438 type.go:168] "Request Body" body=""
	I1212 19:51:05.696638   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:05.696994   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:05.697052   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:06.197016   48438 type.go:168] "Request Body" body=""
	I1212 19:51:06.197103   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:06.197772   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-384006 request/response cycle repeats roughly every 500 ms from 19:51:06 through 19:52:07, every response coming back empty (status="" headers="" milliseconds=0); node_ready.go:55 emits the same warning about every 2-2.5 s: error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused ...]
	I1212 19:52:08.197031   48438 type.go:168] "Request Body" body=""
	I1212 19:52:08.197097   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:08.197357   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:08.697148   48438 type.go:168] "Request Body" body=""
	I1212 19:52:08.697227   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:08.697556   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:09.197392   48438 type.go:168] "Request Body" body=""
	I1212 19:52:09.197468   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:09.197784   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:09.197845   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:09.696549   48438 type.go:168] "Request Body" body=""
	I1212 19:52:09.696616   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:09.696887   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:10.196962   48438 type.go:168] "Request Body" body=""
	I1212 19:52:10.197039   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:10.197334   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:10.696626   48438 type.go:168] "Request Body" body=""
	I1212 19:52:10.696717   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:10.697024   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:11.196847   48438 type.go:168] "Request Body" body=""
	I1212 19:52:11.196921   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:11.197227   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:11.696601   48438 type.go:168] "Request Body" body=""
	I1212 19:52:11.696679   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:11.696981   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:11.697032   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:12.196580   48438 type.go:168] "Request Body" body=""
	I1212 19:52:12.196650   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:12.196940   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:12.696545   48438 type.go:168] "Request Body" body=""
	I1212 19:52:12.696621   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:12.696869   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:13.196568   48438 type.go:168] "Request Body" body=""
	I1212 19:52:13.196664   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:13.196980   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:13.696589   48438 type.go:168] "Request Body" body=""
	I1212 19:52:13.696666   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:13.697006   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:13.697058   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:14.196560   48438 type.go:168] "Request Body" body=""
	I1212 19:52:14.196631   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:14.196946   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:14.696636   48438 type.go:168] "Request Body" body=""
	I1212 19:52:14.696714   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:14.697058   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:15.196659   48438 type.go:168] "Request Body" body=""
	I1212 19:52:15.196740   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:15.197071   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:15.696563   48438 type.go:168] "Request Body" body=""
	I1212 19:52:15.696653   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:15.696954   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:16.196956   48438 type.go:168] "Request Body" body=""
	I1212 19:52:16.197033   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:16.197379   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:16.197433   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:16.696942   48438 type.go:168] "Request Body" body=""
	I1212 19:52:16.697013   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:16.697325   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:17.197029   48438 type.go:168] "Request Body" body=""
	I1212 19:52:17.197104   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:17.197358   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:17.697015   48438 type.go:168] "Request Body" body=""
	I1212 19:52:17.697084   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:17.697367   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:18.196629   48438 type.go:168] "Request Body" body=""
	I1212 19:52:18.196717   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:18.197023   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:18.696554   48438 type.go:168] "Request Body" body=""
	I1212 19:52:18.696628   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:18.696875   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:18.696923   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:19.196580   48438 type.go:168] "Request Body" body=""
	I1212 19:52:19.196654   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:19.196987   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:19.696532   48438 type.go:168] "Request Body" body=""
	I1212 19:52:19.696605   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:19.696921   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:20.196969   48438 type.go:168] "Request Body" body=""
	I1212 19:52:20.197044   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:20.197330   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:20.696598   48438 type.go:168] "Request Body" body=""
	I1212 19:52:20.696690   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:20.696997   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:20.697054   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:21.197019   48438 type.go:168] "Request Body" body=""
	I1212 19:52:21.197109   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:21.197420   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:21.697065   48438 type.go:168] "Request Body" body=""
	I1212 19:52:21.697171   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:21.697471   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:22.197327   48438 type.go:168] "Request Body" body=""
	I1212 19:52:22.197400   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:22.197732   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:22.697523   48438 type.go:168] "Request Body" body=""
	I1212 19:52:22.697602   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:22.697908   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:22.697961   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:23.196582   48438 type.go:168] "Request Body" body=""
	I1212 19:52:23.196653   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:23.196911   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:23.696648   48438 type.go:168] "Request Body" body=""
	I1212 19:52:23.696728   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:23.697054   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:24.196615   48438 type.go:168] "Request Body" body=""
	I1212 19:52:24.196693   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:24.197072   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:24.696554   48438 type.go:168] "Request Body" body=""
	I1212 19:52:24.696620   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:24.696867   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:25.196559   48438 type.go:168] "Request Body" body=""
	I1212 19:52:25.196634   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:25.196989   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:25.197049   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:25.696745   48438 type.go:168] "Request Body" body=""
	I1212 19:52:25.696823   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:25.697176   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:26.197032   48438 type.go:168] "Request Body" body=""
	I1212 19:52:26.197104   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:26.197365   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:26.697133   48438 type.go:168] "Request Body" body=""
	I1212 19:52:26.697207   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:26.697533   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:27.197240   48438 type.go:168] "Request Body" body=""
	I1212 19:52:27.197313   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:27.197651   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:27.197708   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:27.696997   48438 type.go:168] "Request Body" body=""
	I1212 19:52:27.697111   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:27.697348   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:28.197148   48438 type.go:168] "Request Body" body=""
	I1212 19:52:28.197218   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:28.197538   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:28.697363   48438 type.go:168] "Request Body" body=""
	I1212 19:52:28.697444   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:28.697821   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:29.197283   48438 type.go:168] "Request Body" body=""
	I1212 19:52:29.197351   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:29.197604   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:29.697409   48438 type.go:168] "Request Body" body=""
	I1212 19:52:29.697482   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:29.697829   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:29.697881   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:30.196648   48438 type.go:168] "Request Body" body=""
	I1212 19:52:30.196718   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:30.197048   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:30.696605   48438 type.go:168] "Request Body" body=""
	I1212 19:52:30.696685   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:30.696999   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:31.196917   48438 type.go:168] "Request Body" body=""
	I1212 19:52:31.196985   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:31.197286   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:31.696593   48438 type.go:168] "Request Body" body=""
	I1212 19:52:31.696671   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:31.697003   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:32.196637   48438 type.go:168] "Request Body" body=""
	I1212 19:52:32.196716   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:32.196973   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:32.197032   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:32.696666   48438 type.go:168] "Request Body" body=""
	I1212 19:52:32.696739   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:32.697092   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:33.196825   48438 type.go:168] "Request Body" body=""
	I1212 19:52:33.196900   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:33.197340   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:33.697027   48438 type.go:168] "Request Body" body=""
	I1212 19:52:33.697095   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:33.697364   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:34.197120   48438 type.go:168] "Request Body" body=""
	I1212 19:52:34.197191   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:34.197507   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:34.197557   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:34.697300   48438 type.go:168] "Request Body" body=""
	I1212 19:52:34.697378   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:34.697686   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:35.197072   48438 type.go:168] "Request Body" body=""
	I1212 19:52:35.197158   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:35.197415   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:35.697050   48438 type.go:168] "Request Body" body=""
	I1212 19:52:35.697129   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:35.697418   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:36.197163   48438 type.go:168] "Request Body" body=""
	I1212 19:52:36.197234   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:36.197573   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:36.197628   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:36.697048   48438 type.go:168] "Request Body" body=""
	I1212 19:52:36.697115   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:36.697374   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:37.197145   48438 type.go:168] "Request Body" body=""
	I1212 19:52:37.197222   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:37.197577   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:37.697363   48438 type.go:168] "Request Body" body=""
	I1212 19:52:37.697438   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:37.697758   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:38.197052   48438 type.go:168] "Request Body" body=""
	I1212 19:52:38.197121   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:38.197364   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:38.697121   48438 type.go:168] "Request Body" body=""
	I1212 19:52:38.697188   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:38.697511   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:38.697564   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:39.197148   48438 type.go:168] "Request Body" body=""
	I1212 19:52:39.197221   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:39.197541   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:39.697045   48438 type.go:168] "Request Body" body=""
	I1212 19:52:39.697121   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:39.697416   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:40.197422   48438 type.go:168] "Request Body" body=""
	I1212 19:52:40.197496   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:40.197841   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:40.696587   48438 type.go:168] "Request Body" body=""
	I1212 19:52:40.696660   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:40.697003   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:41.196830   48438 type.go:168] "Request Body" body=""
	I1212 19:52:41.196900   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:41.197165   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:41.197208   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:41.696885   48438 type.go:168] "Request Body" body=""
	I1212 19:52:41.696962   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:41.697302   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:42.197049   48438 type.go:168] "Request Body" body=""
	I1212 19:52:42.197136   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:42.197480   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:42.697034   48438 type.go:168] "Request Body" body=""
	I1212 19:52:42.697109   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:42.697359   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:43.197128   48438 type.go:168] "Request Body" body=""
	I1212 19:52:43.197206   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:43.197560   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:43.197616   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:43.697366   48438 type.go:168] "Request Body" body=""
	I1212 19:52:43.697437   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:43.697733   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:44.197049   48438 type.go:168] "Request Body" body=""
	I1212 19:52:44.197119   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:44.197383   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:44.697154   48438 type.go:168] "Request Body" body=""
	I1212 19:52:44.697224   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:44.697554   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:45.197418   48438 type.go:168] "Request Body" body=""
	I1212 19:52:45.197622   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:45.198043   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:45.198111   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:45.696799   48438 type.go:168] "Request Body" body=""
	I1212 19:52:45.696866   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:45.697155   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:46.197195   48438 type.go:168] "Request Body" body=""
	I1212 19:52:46.197330   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:46.197994   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:46.696797   48438 type.go:168] "Request Body" body=""
	I1212 19:52:46.696869   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:46.697189   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:47.196859   48438 type.go:168] "Request Body" body=""
	I1212 19:52:47.196928   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:47.197254   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:47.696598   48438 type.go:168] "Request Body" body=""
	I1212 19:52:47.696688   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:47.697025   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:47.697081   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:48.196588   48438 type.go:168] "Request Body" body=""
	I1212 19:52:48.196659   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:48.196981   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:48.696595   48438 type.go:168] "Request Body" body=""
	I1212 19:52:48.696678   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:48.696958   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:49.196596   48438 type.go:168] "Request Body" body=""
	I1212 19:52:49.196668   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:49.196997   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:49.696687   48438 type.go:168] "Request Body" body=""
	I1212 19:52:49.696757   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:49.697080   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:49.697134   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:50.197041   48438 type.go:168] "Request Body" body=""
	I1212 19:52:50.197117   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:50.197390   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:50.697208   48438 type.go:168] "Request Body" body=""
	I1212 19:52:50.697281   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:50.697595   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:51.197221   48438 type.go:168] "Request Body" body=""
	I1212 19:52:51.197312   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:51.197623   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:51.697072   48438 type.go:168] "Request Body" body=""
	I1212 19:52:51.697142   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:51.697387   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:51.697429   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:52.197188   48438 type.go:168] "Request Body" body=""
	I1212 19:52:52.197264   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:52.197590   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:52.697371   48438 type.go:168] "Request Body" body=""
	I1212 19:52:52.697445   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:52.697761   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:53.197033   48438 type.go:168] "Request Body" body=""
	I1212 19:52:53.197099   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:53.197352   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:53.697175   48438 type.go:168] "Request Body" body=""
	I1212 19:52:53.697245   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:53.697552   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:53.697607   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:54.197356   48438 type.go:168] "Request Body" body=""
	I1212 19:52:54.197428   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:54.197758   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:54.697050   48438 type.go:168] "Request Body" body=""
	I1212 19:52:54.697121   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:54.697377   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:55.197152   48438 type.go:168] "Request Body" body=""
	I1212 19:52:55.197228   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:55.197547   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:55.697340   48438 type.go:168] "Request Body" body=""
	I1212 19:52:55.697417   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:55.697762   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:55.697823   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:56.197164   48438 type.go:168] "Request Body" body=""
	I1212 19:52:56.197236   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:56.197494   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:56.697202   48438 type.go:168] "Request Body" body=""
	I1212 19:52:56.697282   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:56.697569   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:58.197379   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-384006 request/response pair repeats at ~500 ms intervals from 19:52:57 through 19:53:58 (~120 polls), every response empty (status="" milliseconds=0), and the node_ready.go:55 "connection refused" warning above recurs roughly every two seconds, last at 19:53:57.697102 ...]
	I1212 19:53:59.196544   48438 type.go:168] "Request Body" body=""
	I1212 19:53:59.196620   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:59.196916   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:59.696616   48438 type.go:168] "Request Body" body=""
	I1212 19:53:59.696690   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:59.696999   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:00.196755   48438 type.go:168] "Request Body" body=""
	I1212 19:54:00.196856   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:00.197201   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:00.197255   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:00.696903   48438 type.go:168] "Request Body" body=""
	I1212 19:54:00.696982   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:00.697296   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:01.197188   48438 type.go:168] "Request Body" body=""
	I1212 19:54:01.197260   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:01.197599   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:01.697267   48438 type.go:168] "Request Body" body=""
	I1212 19:54:01.697339   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:01.697686   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:02.197043   48438 type.go:168] "Request Body" body=""
	I1212 19:54:02.197122   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:02.197381   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:02.197430   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:02.697170   48438 type.go:168] "Request Body" body=""
	I1212 19:54:02.697265   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:02.697621   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:03.197435   48438 type.go:168] "Request Body" body=""
	I1212 19:54:03.197518   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:03.197849   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:03.696519   48438 type.go:168] "Request Body" body=""
	I1212 19:54:03.696591   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:03.696894   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:04.196608   48438 type.go:168] "Request Body" body=""
	I1212 19:54:04.196681   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:04.197029   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:04.696731   48438 type.go:168] "Request Body" body=""
	I1212 19:54:04.696801   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:04.697124   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:04.697174   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:05.196541   48438 type.go:168] "Request Body" body=""
	I1212 19:54:05.196621   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:05.196959   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:05.696572   48438 type.go:168] "Request Body" body=""
	I1212 19:54:05.696651   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:05.696979   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:06.196971   48438 type.go:168] "Request Body" body=""
	I1212 19:54:06.197050   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:06.197372   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:06.696982   48438 type.go:168] "Request Body" body=""
	I1212 19:54:06.697050   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:06.697313   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:06.697353   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:07.197152   48438 type.go:168] "Request Body" body=""
	I1212 19:54:07.197223   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:07.197552   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:07.697346   48438 type.go:168] "Request Body" body=""
	I1212 19:54:07.697416   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:07.697736   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:08.197037   48438 type.go:168] "Request Body" body=""
	I1212 19:54:08.197113   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:08.197390   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:08.697159   48438 type.go:168] "Request Body" body=""
	I1212 19:54:08.697238   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:08.697572   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:08.697622   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:09.197260   48438 type.go:168] "Request Body" body=""
	I1212 19:54:09.197335   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:09.197650   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:09.697011   48438 type.go:168] "Request Body" body=""
	I1212 19:54:09.697085   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:09.697367   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:10.197549   48438 type.go:168] "Request Body" body=""
	I1212 19:54:10.197634   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:10.197971   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:10.696564   48438 type.go:168] "Request Body" body=""
	I1212 19:54:10.696638   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:10.696971   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:11.196845   48438 type.go:168] "Request Body" body=""
	I1212 19:54:11.196925   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:11.197172   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:11.197214   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:11.696846   48438 type.go:168] "Request Body" body=""
	I1212 19:54:11.696918   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:11.697216   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:12.196612   48438 type.go:168] "Request Body" body=""
	I1212 19:54:12.196682   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:12.197027   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:12.696568   48438 type.go:168] "Request Body" body=""
	I1212 19:54:12.696638   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:12.696933   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:13.196634   48438 type.go:168] "Request Body" body=""
	I1212 19:54:13.196725   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:13.197087   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:13.696789   48438 type.go:168] "Request Body" body=""
	I1212 19:54:13.696882   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:13.697231   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:13.697285   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:14.196910   48438 type.go:168] "Request Body" body=""
	I1212 19:54:14.196976   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:14.197328   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:14.697114   48438 type.go:168] "Request Body" body=""
	I1212 19:54:14.697187   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:14.697517   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:15.197329   48438 type.go:168] "Request Body" body=""
	I1212 19:54:15.197401   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:15.197739   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:15.697022   48438 type.go:168] "Request Body" body=""
	I1212 19:54:15.697095   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:15.697438   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:15.697494   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:16.197185   48438 type.go:168] "Request Body" body=""
	I1212 19:54:16.197263   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:16.197574   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:16.697365   48438 type.go:168] "Request Body" body=""
	I1212 19:54:16.697441   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:16.697760   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:17.197010   48438 type.go:168] "Request Body" body=""
	I1212 19:54:17.197077   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:17.197323   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:17.696609   48438 type.go:168] "Request Body" body=""
	I1212 19:54:17.696678   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:17.696995   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:18.196608   48438 type.go:168] "Request Body" body=""
	I1212 19:54:18.196691   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:18.197012   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:18.197067   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:18.696737   48438 type.go:168] "Request Body" body=""
	I1212 19:54:18.696805   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:18.697100   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:19.196598   48438 type.go:168] "Request Body" body=""
	I1212 19:54:19.196675   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:19.196990   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:19.696701   48438 type.go:168] "Request Body" body=""
	I1212 19:54:19.696780   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:19.697061   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:20.196546   48438 type.go:168] "Request Body" body=""
	I1212 19:54:20.196624   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:20.196899   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:20.696611   48438 type.go:168] "Request Body" body=""
	I1212 19:54:20.696682   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:20.697017   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:20.697069   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:21.196889   48438 type.go:168] "Request Body" body=""
	I1212 19:54:21.196962   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:21.197310   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:21.696541   48438 type.go:168] "Request Body" body=""
	I1212 19:54:21.696609   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:21.696897   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:22.196588   48438 type.go:168] "Request Body" body=""
	I1212 19:54:22.196663   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:22.196947   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:22.696626   48438 type.go:168] "Request Body" body=""
	I1212 19:54:22.696697   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:22.697034   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:22.697092   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:23.196546   48438 type.go:168] "Request Body" body=""
	I1212 19:54:23.196618   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:23.196862   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:23.696546   48438 type.go:168] "Request Body" body=""
	I1212 19:54:23.696624   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:23.696934   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:24.196592   48438 type.go:168] "Request Body" body=""
	I1212 19:54:24.196663   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:24.197022   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:24.696534   48438 type.go:168] "Request Body" body=""
	I1212 19:54:24.696609   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:24.696904   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:25.196574   48438 type.go:168] "Request Body" body=""
	I1212 19:54:25.196649   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:25.196992   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:25.197054   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:25.696715   48438 type.go:168] "Request Body" body=""
	I1212 19:54:25.696805   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:25.697123   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:26.197072   48438 type.go:168] "Request Body" body=""
	I1212 19:54:26.197139   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:26.197388   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:26.697204   48438 type.go:168] "Request Body" body=""
	I1212 19:54:26.697275   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:26.697575   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:27.197337   48438 type.go:168] "Request Body" body=""
	I1212 19:54:27.197409   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:27.197721   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:27.197781   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:27.697027   48438 type.go:168] "Request Body" body=""
	I1212 19:54:27.697097   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:27.697337   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:28.197152   48438 type.go:168] "Request Body" body=""
	I1212 19:54:28.197230   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:28.197559   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:28.697347   48438 type.go:168] "Request Body" body=""
	I1212 19:54:28.697417   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:28.697713   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:29.197011   48438 type.go:168] "Request Body" body=""
	I1212 19:54:29.197084   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:29.197381   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:29.697150   48438 type.go:168] "Request Body" body=""
	I1212 19:54:29.697222   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:29.697555   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:29.697607   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:30.197366   48438 type.go:168] "Request Body" body=""
	I1212 19:54:30.197441   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:30.197781   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:30.696486   48438 type.go:168] "Request Body" body=""
	I1212 19:54:30.696556   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:30.696811   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:31.196900   48438 type.go:168] "Request Body" body=""
	I1212 19:54:31.196971   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:31.197252   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:31.696927   48438 type.go:168] "Request Body" body=""
	I1212 19:54:31.697006   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:31.697340   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:32.196886   48438 type.go:168] "Request Body" body=""
	I1212 19:54:32.196974   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:32.197251   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:32.197302   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:32.696579   48438 type.go:168] "Request Body" body=""
	I1212 19:54:32.696652   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:32.696967   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:33.196682   48438 type.go:168] "Request Body" body=""
	I1212 19:54:33.196752   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:33.197083   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:33.696765   48438 type.go:168] "Request Body" body=""
	I1212 19:54:33.696829   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:33.697124   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:34.196597   48438 type.go:168] "Request Body" body=""
	I1212 19:54:34.196667   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:34.197010   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:34.696713   48438 type.go:168] "Request Body" body=""
	I1212 19:54:34.696782   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:34.697098   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:34.697159   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:35.196600   48438 type.go:168] "Request Body" body=""
	I1212 19:54:35.196677   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:35.197023   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:35.696607   48438 type.go:168] "Request Body" body=""
	I1212 19:54:35.696685   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:35.697032   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:36.197131   48438 type.go:168] "Request Body" body=""
	I1212 19:54:36.197248   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:36.197583   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:36.697028   48438 type.go:168] "Request Body" body=""
	I1212 19:54:36.697092   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:36.697333   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:36.697376   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:37.197121   48438 type.go:168] "Request Body" body=""
	I1212 19:54:37.197202   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:37.197549   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:37.697356   48438 type.go:168] "Request Body" body=""
	I1212 19:54:37.697425   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:37.697755   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:38.196491   48438 type.go:168] "Request Body" body=""
	I1212 19:54:38.196580   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:38.196847   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:38.696555   48438 type.go:168] "Request Body" body=""
	I1212 19:54:38.696630   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:38.697022   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:39.196611   48438 type.go:168] "Request Body" body=""
	I1212 19:54:39.196682   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:39.196997   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:39.197044   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:39.696650   48438 type.go:168] "Request Body" body=""
	I1212 19:54:39.696714   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:39.696973   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:40.197051   48438 type.go:168] "Request Body" body=""
	I1212 19:54:40.197133   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:40.197510   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:40.697348   48438 type.go:168] "Request Body" body=""
	I1212 19:54:40.697434   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:40.697779   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:41.197126   48438 type.go:168] "Request Body" body=""
	I1212 19:54:41.197191   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:41.197489   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:41.197543   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:41.697273   48438 type.go:168] "Request Body" body=""
	I1212 19:54:41.697350   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:41.697678   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:42.197609   48438 type.go:168] "Request Body" body=""
	I1212 19:54:42.197692   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:42.198720   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:42.697042   48438 type.go:168] "Request Body" body=""
	I1212 19:54:42.697110   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:42.697353   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:43.197138   48438 type.go:168] "Request Body" body=""
	I1212 19:54:43.197208   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:43.197507   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:43.197562   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:43.697072   48438 type.go:168] "Request Body" body=""
	I1212 19:54:43.697139   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:43.697491   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:44.197018   48438 type.go:168] "Request Body" body=""
	I1212 19:54:44.197082   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:44.197326   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:44.696568   48438 type.go:168] "Request Body" body=""
	I1212 19:54:44.696643   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:44.696984   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:45.196739   48438 type.go:168] "Request Body" body=""
	I1212 19:54:45.196924   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:45.201386   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1212 19:54:45.201507   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
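Every response in this window is status="" milliseconds=0 (the 4ms attempt above included) with a dial-level "connection refused": the TCP connection is rejected before any HTTP exchange, which points at a kube-apiserver that is down or restarting rather than merely slow. A quick standalone probe (a sketch, not part of the test suite) that distinguishes refusal from a timeout:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same endpoint the test polls; "connection refused" means
		// the host is reachable but nothing is listening on the port, while
		// a timeout would suggest the host is unreachable or filtered.
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting TCP connections")
	}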
	[... the same request/response pattern continues every ~500ms; the "connection refused" warning recurs at 19:54:47, 19:54:50 and 19:54:52 ...]
	I1212 19:54:54.696537   48438 type.go:168] "Request Body" body=""
	I1212 19:54:54.696603   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:54.696895   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:55.196600   48438 type.go:168] "Request Body" body=""
	I1212 19:54:55.196688   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:55.196967   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:55.197009   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:55.696707   48438 type.go:168] "Request Body" body=""
	I1212 19:54:55.696782   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:55.697095   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:56.197044   48438 type.go:168] "Request Body" body=""
	I1212 19:54:56.197110   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:56.197358   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:56.697176   48438 type.go:168] "Request Body" body=""
	I1212 19:54:56.697247   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:56.697564   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:57.197362   48438 type.go:168] "Request Body" body=""
	I1212 19:54:57.197443   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:57.197770   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:57.197827   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:57.696510   48438 type.go:168] "Request Body" body=""
	I1212 19:54:57.696582   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:57.696850   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:58.196551   48438 type.go:168] "Request Body" body=""
	I1212 19:54:58.196621   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:58.196910   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:58.696537   48438 type.go:168] "Request Body" body=""
	I1212 19:54:58.696617   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:58.696970   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:59.196540   48438 type.go:168] "Request Body" body=""
	I1212 19:54:59.196642   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:59.196980   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:59.696618   48438 type.go:168] "Request Body" body=""
	I1212 19:54:59.696689   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:59.697012   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:59.697072   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:55:00.196536   48438 type.go:168] "Request Body" body=""
	I1212 19:55:00.196632   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:00.196977   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:00.696665   48438 type.go:168] "Request Body" body=""
	I1212 19:55:00.696746   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:00.697082   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:01.197001   48438 type.go:168] "Request Body" body=""
	I1212 19:55:01.197085   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:01.197440   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:01.697258   48438 type.go:168] "Request Body" body=""
	I1212 19:55:01.697333   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:01.697671   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:55:01.697735   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:55:02.197006   48438 type.go:168] "Request Body" body=""
	I1212 19:55:02.197095   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:02.197408   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:02.697262   48438 type.go:168] "Request Body" body=""
	I1212 19:55:02.697333   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:02.697664   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:03.197461   48438 type.go:168] "Request Body" body=""
	I1212 19:55:03.197544   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:03.197886   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:03.696539   48438 type.go:168] "Request Body" body=""
	I1212 19:55:03.696609   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:03.696903   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:04.196604   48438 type.go:168] "Request Body" body=""
	I1212 19:55:04.196692   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:04.197007   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:55:04.197059   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:55:04.696722   48438 type.go:168] "Request Body" body=""
	I1212 19:55:04.696801   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:04.697084   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:05.196551   48438 type.go:168] "Request Body" body=""
	I1212 19:55:05.196619   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:05.196920   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:05.696558   48438 type.go:168] "Request Body" body=""
	I1212 19:55:05.696654   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:05.696970   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:06.196854   48438 type.go:168] "Request Body" body=""
	I1212 19:55:06.196928   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:06.197258   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:55:06.197306   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:55:06.696660   48438 type.go:168] "Request Body" body=""
	I1212 19:55:06.696733   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:06.696983   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:07.196575   48438 type.go:168] "Request Body" body=""
	I1212 19:55:07.196663   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:07.197112   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:07.696611   48438 type.go:168] "Request Body" body=""
	I1212 19:55:07.696697   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:07.697039   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:08.196559   48438 type.go:168] "Request Body" body=""
	I1212 19:55:08.196627   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:08.196929   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:08.696573   48438 type.go:168] "Request Body" body=""
	I1212 19:55:08.696643   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:08.696979   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:55:08.697031   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:55:09.196708   48438 type.go:168] "Request Body" body=""
	I1212 19:55:09.196785   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:09.197099   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:09.696682   48438 type.go:168] "Request Body" body=""
	I1212 19:55:09.696750   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:09.697054   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:10.196593   48438 type.go:168] "Request Body" body=""
	I1212 19:55:10.196676   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:10.197018   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:10.696766   48438 type.go:168] "Request Body" body=""
	I1212 19:55:10.696855   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:10.697231   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:55:10.697295   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:55:11.196994   48438 node_ready.go:38] duration metric: took 6m0.000614517s for node "functional-384006" to be "Ready" ...
	I1212 19:55:11.200166   48438 out.go:203] 
	W1212 19:55:11.203009   48438 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 19:55:11.203186   48438 out.go:285] * 
	W1212 19:55:11.205457   48438 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 19:55:11.208306   48438 out.go:203] 

** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-384006 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m5.58255717s for "functional-384006" cluster.
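The stderr above shows what the failure actually is: node_ready.go polls GET /api/v1/nodes/functional-384006 every ~500ms and every attempt ends in "connection refused", so nothing was listening on 192.168.49.2:8441 for the whole 6m0s StartHostTimeout even though the container itself stayed up. The same readiness probe can be run by hand when triaging; a minimal sketch, assuming the kubeconfig context minikube wrote for this profile still exists:

	# Read the exact field the wait loop checks
	kubectl --context functional-384006 get node functional-384006 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

	# Or block on it with the same 6m budget minikube used
	kubectl --context functional-384006 wait --for=condition=Ready node/functional-384006 --timeout=6m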
I1212 19:55:11.696903    4120 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-384006
helpers_test.go:244: (dbg) docker inspect functional-384006:

-- stdout --
	[
	    {
	        "Id": "b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17",
	        "Created": "2025-12-12T19:40:49.413785329Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43086,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T19:40:49.485581335Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/hosts",
	        "LogPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17-json.log",
	        "Name": "/functional-384006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-384006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-384006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17",
	                "LowerDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296-init/diff:/var/lib/docker/overlay2/e045d4bf347c64f3cbf42a97f0cb5729ed5699bda73ca5751717f555f7c01df1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/merged",
	                "UpperDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/diff",
	                "WorkDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-384006",
	                "Source": "/var/lib/docker/volumes/functional-384006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-384006",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-384006",
	                "name.minikube.sigs.k8s.io": "functional-384006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "36cb954f7d4f6bf90d415ba6b309740af43913afba20f6d7d93ec3c7d90d4de5",
	            "SandboxKey": "/var/run/docker/netns/36cb954f7d4f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-384006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:63:42:b7:50:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef3790c143c0333ab10341d6a40177cef53914dddf926d048a811221f7b4d25e",
	                    "EndpointID": "d9f77e46696253f9c3ce8a0a36703d7a03738ae348c39276dbe99fc3079fb5ee",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-384006",
	                        "b1a98cbc4698"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
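The full inspect dump above is kept for the archive, but triage usually needs only a few fields; `docker inspect -f` with a Go template (the same mechanism the harness itself uses to find the SSH port later in this log) pulls them out directly. A sketch against this container:

	docker inspect -f '{{.State.Status}}' functional-384006
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-384006
	docker inspect -f '{{(index .NetworkSettings.Networks "functional-384006").IPAddress}}' functional-384006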
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-384006 -n functional-384006
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-384006 -n functional-384006: exit status 2 (415.577605ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
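Note that `--format={{.Host}}` reports only the container state, which is why the command prints Running while the apiserver inside it is refusing connections; as helpers_test notes, the non-zero exit is the expected signal that some component is unhealthy. Running status without the format filter (JSON output is supported) shows which one:

	out/minikube-linux-arm64 status -p functional-384006 --output json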
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-008271 image load --daemon kicbase/echo-server:functional-008271 --alsologtostderr                                                                   │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ ssh            │ functional-008271 ssh sudo cat /usr/share/ca-certificates/41202.pem                                                                                             │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ ssh            │ functional-008271 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image ls                                                                                                                                      │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image load --daemon kicbase/echo-server:functional-008271 --alsologtostderr                                                                   │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ update-context │ functional-008271 update-context --alsologtostderr -v=2                                                                                                         │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image ls                                                                                                                                      │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ update-context │ functional-008271 update-context --alsologtostderr -v=2                                                                                                         │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image save kicbase/echo-server:functional-008271 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ update-context │ functional-008271 update-context --alsologtostderr -v=2                                                                                                         │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image rm kicbase/echo-server:functional-008271 --alsologtostderr                                                                              │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image ls                                                                                                                                      │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image ls                                                                                                                                      │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image save --daemon kicbase/echo-server:functional-008271 --alsologtostderr                                                                   │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image ls --format short --alsologtostderr                                                                                                     │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image ls --format yaml --alsologtostderr                                                                                                      │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image ls --format json --alsologtostderr                                                                                                      │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image ls --format table --alsologtostderr                                                                                                     │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ ssh            │ functional-008271 ssh pgrep buildkitd                                                                                                                           │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │                     │
	│ image          │ functional-008271 image build -t localhost/my-image:functional-008271 testdata/build --alsologtostderr                                                          │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image ls                                                                                                                                      │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ delete         │ -p functional-008271                                                                                                                                            │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ start          │ -p functional-384006 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0         │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │                     │
	│ start          │ -p functional-384006 --alsologtostderr -v=8                                                                                                                     │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:49 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:49:06
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
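	Decoding the fatal warning from this run against that format, as a worked example:
	
	W1212 19:55:11.203009   48438 out.go:285] X Exiting due to GUEST_START: ...
	W               = severity (I/W/E/F)
	1212            = mmdd (Dec 12)
	19:55:11.203009 = hh:mm:ss.uuuuuu
	48438           = threadid
	out.go:285      = file:line that emitted the message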
	I1212 19:49:06.161667   48438 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:49:06.161882   48438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:49:06.161913   48438 out.go:374] Setting ErrFile to fd 2...
	I1212 19:49:06.161935   48438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:49:06.162192   48438 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 19:49:06.162605   48438 out.go:368] Setting JSON to false
	I1212 19:49:06.163501   48438 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1896,"bootTime":1765567051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 19:49:06.163603   48438 start.go:143] virtualization:  
	I1212 19:49:06.167059   48438 out.go:179] * [functional-384006] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 19:49:06.170023   48438 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:49:06.170127   48438 notify.go:221] Checking for updates...
	I1212 19:49:06.175791   48438 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:49:06.178620   48438 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:49:06.181479   48438 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	I1212 19:49:06.184334   48438 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 19:49:06.187177   48438 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:49:06.190472   48438 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 19:49:06.190582   48438 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:49:06.226589   48438 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 19:49:06.226705   48438 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:49:06.287038   48438 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 19:49:06.278380602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 19:49:06.287144   48438 docker.go:319] overlay module found
	I1212 19:49:06.290214   48438 out.go:179] * Using the docker driver based on existing profile
	I1212 19:49:06.293103   48438 start.go:309] selected driver: docker
	I1212 19:49:06.293122   48438 start.go:927] validating driver "docker" against &{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:49:06.293257   48438 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:49:06.293353   48438 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:49:06.346602   48438 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 19:49:06.338111982 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 19:49:06.347001   48438 cni.go:84] Creating CNI manager for ""
	I1212 19:49:06.347058   48438 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
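	cni.go derives the CNI from the driver/runtime pair; for the docker driver with containerd it lands on kindnet, as logged above. The choice can also be pinned on the start command rather than inferred, e.g. (a sketch; --cni also accepts bridge, calico, cilium, flannel, or a path to a manifest):
	
	out/minikube-linux-arm64 start -p functional-384006 --cni=kindnet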
	I1212 19:49:06.347109   48438 start.go:353] cluster config:
	{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:49:06.350199   48438 out.go:179] * Starting "functional-384006" primary control-plane node in "functional-384006" cluster
	I1212 19:49:06.353090   48438 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 19:49:06.356052   48438 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 19:49:06.358945   48438 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 19:49:06.359005   48438 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 19:49:06.359039   48438 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 19:49:06.359049   48438 cache.go:65] Caching tarball of preloaded images
	I1212 19:49:06.359132   48438 preload.go:238] Found /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 19:49:06.359143   48438 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 19:49:06.359246   48438 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/config.json ...
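	The whole cluster config shown above round-trips through that per-profile config.json, so when a profile is wedged it can be read straight off disk at the path from this log line:
	
	cat /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/config.json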
	I1212 19:49:06.377622   48438 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 19:49:06.377646   48438 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 19:49:06.377660   48438 cache.go:243] Successfully downloaded all kic artifacts
	I1212 19:49:06.377689   48438 start.go:360] acquireMachinesLock for functional-384006: {Name:mk3334c8fedf7efc32fb4628474f2cba3c1d9181 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:49:06.377751   48438 start.go:364] duration metric: took 39.285µs to acquireMachinesLock for "functional-384006"
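
The acquireMachinesLock entry above records a file-backed lock taken with a 500ms retry delay and a 10m0s timeout. A generic stand-in in Go for that acquire loop (minikube uses a lock library internally; the lock path here is hypothetical):

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // acquire polls until it can create the lock file, retrying every `delay`
    // and giving up after `timeout`, mirroring the parameters in the log.
    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		// O_EXCL makes file creation the atomic "take the lock" step.
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, errors.New("timed out acquiring " + path)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer release()
    	fmt.Println("lock held")
    }
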
	I1212 19:49:06.377774   48438 start.go:96] Skipping create...Using existing machine configuration
	I1212 19:49:06.377781   48438 fix.go:54] fixHost starting: 
	I1212 19:49:06.378037   48438 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
	I1212 19:49:06.394046   48438 fix.go:112] recreateIfNeeded on functional-384006: state=Running err=<nil>
	W1212 19:49:06.394073   48438 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 19:49:06.397347   48438 out.go:252] * Updating the running docker "functional-384006" container ...
	I1212 19:49:06.397378   48438 machine.go:94] provisionDockerMachine start ...
	I1212 19:49:06.397470   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:06.413547   48438 main.go:143] libmachine: Using SSH client type: native
	I1212 19:49:06.413876   48438 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:49:06.413891   48438 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 19:49:06.567084   48438 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-384006
	
	I1212 19:49:06.567107   48438 ubuntu.go:182] provisioning hostname "functional-384006"
	I1212 19:49:06.567205   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:06.584099   48438 main.go:143] libmachine: Using SSH client type: native
	I1212 19:49:06.584405   48438 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:49:06.584422   48438 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-384006 && echo "functional-384006" | sudo tee /etc/hostname
	I1212 19:49:06.744613   48438 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-384006
	
	I1212 19:49:06.744691   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:06.765941   48438 main.go:143] libmachine: Using SSH client type: native
	I1212 19:49:06.766253   48438 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:49:06.766274   48438 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-384006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-384006/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-384006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 19:49:06.919909   48438 main.go:143] libmachine: SSH cmd err, output: <nil>: 
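
The provisioning commands above (hostname, writing /etc/hostname, patching /etc/hosts) are all issued through libmachine's native SSH client against 127.0.0.1:32788. A minimal sketch of running one such command with golang.org/x/crypto/ssh, assuming the key path and port shown in this log:

    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Hypothetical key path, matching the sshutil entries later in this log.
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig: no host-key pinning
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:32788", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(`sudo hostname functional-384006 && echo "functional-384006" | sudo tee /etc/hostname`)
    	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
    }
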
	I1212 19:49:06.919937   48438 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-2315/.minikube}
	I1212 19:49:06.919964   48438 ubuntu.go:190] setting up certificates
	I1212 19:49:06.919986   48438 provision.go:84] configureAuth start
	I1212 19:49:06.920046   48438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384006
	I1212 19:49:06.936937   48438 provision.go:143] copyHostCerts
	I1212 19:49:06.936980   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem
	I1212 19:49:06.937022   48438 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem, removing ...
	I1212 19:49:06.937035   48438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem
	I1212 19:49:06.937107   48438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem (1078 bytes)
	I1212 19:49:06.937204   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem
	I1212 19:49:06.937227   48438 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem, removing ...
	I1212 19:49:06.937232   48438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem
	I1212 19:49:06.937260   48438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem (1123 bytes)
	I1212 19:49:06.937320   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem
	I1212 19:49:06.937341   48438 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem, removing ...
	I1212 19:49:06.937354   48438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem
	I1212 19:49:06.937380   48438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem (1679 bytes)
	I1212 19:49:06.937435   48438 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem org=jenkins.functional-384006 san=[127.0.0.1 192.168.49.2 functional-384006 localhost minikube]
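
The provision step above mints a server certificate signed by the minikube CA, with the org and SAN list shown in the log line. A rough sketch with crypto/x509, assuming PKCS#1-encoded CA material at placeholder paths and the 26280h expiry from the config dump (error handling trimmed to a helper):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func check(err error) {
    	if err != nil {
    		log.Fatal(err)
    	}
    }

    func main() {
    	caCertPEM, err := os.ReadFile("ca.pem") // placeholder paths
    	check(err)
    	caKeyPEM, err := os.ReadFile("ca-key.pem")
    	check(err)
    	caBlock, _ := pem.Decode(caCertPEM)
    	keyBlock, _ := pem.Decode(caKeyPEM)
    	if caBlock == nil || keyBlock == nil {
    		log.Fatal("bad PEM input")
    	}
    	caCert, err := x509.ParseCertificate(caBlock.Bytes)
    	check(err)
    	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes a PKCS#1 RSA CA key
    	check(err)

    	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	check(err)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.functional-384006"}},
    		// SAN list taken from the log line above.
    		DNSNames:    []string{"functional-384006", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    		NotBefore:   time.Now(),
    		NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    	check(err)
    	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
    }
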
	I1212 19:49:07.142288   48438 provision.go:177] copyRemoteCerts
	I1212 19:49:07.142366   48438 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 19:49:07.142409   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:07.158934   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:07.267886   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 19:49:07.267945   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 19:49:07.284419   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 19:49:07.284477   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 19:49:07.301465   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 19:49:07.301546   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 19:49:07.318717   48438 provision.go:87] duration metric: took 398.706755ms to configureAuth
	I1212 19:49:07.318790   48438 ubuntu.go:206] setting minikube options for container-runtime
	I1212 19:49:07.319006   48438 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 19:49:07.319035   48438 machine.go:97] duration metric: took 921.650297ms to provisionDockerMachine
	I1212 19:49:07.319058   48438 start.go:293] postStartSetup for "functional-384006" (driver="docker")
	I1212 19:49:07.319080   48438 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 19:49:07.319173   48438 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 19:49:07.319238   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:07.336520   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:07.439884   48438 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 19:49:07.443234   48438 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1212 19:49:07.443254   48438 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1212 19:49:07.443259   48438 command_runner.go:130] > VERSION_ID="12"
	I1212 19:49:07.443263   48438 command_runner.go:130] > VERSION="12 (bookworm)"
	I1212 19:49:07.443268   48438 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1212 19:49:07.443272   48438 command_runner.go:130] > ID=debian
	I1212 19:49:07.443276   48438 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1212 19:49:07.443281   48438 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1212 19:49:07.443289   48438 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1212 19:49:07.443341   48438 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 19:49:07.443361   48438 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
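
The "Couldn't set key VERSION_CODENAME" warning above appears because libmachine decodes /etc/os-release into a fixed struct and that key has no corresponding field. A minimal map-based parser of the same file, for illustration only:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	f, err := os.Open("/etc/os-release")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer f.Close()
    	// KEY=VALUE lines; values may be double-quoted, as in the dump above.
    	info := map[string]string{}
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		if k, v, ok := strings.Cut(sc.Text(), "="); ok {
    			info[k] = strings.Trim(v, `"`)
    		}
    	}
    	fmt.Println(info["PRETTY_NAME"]) // e.g. Debian GNU/Linux 12 (bookworm)
    }
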
	I1212 19:49:07.443371   48438 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-2315/.minikube/addons for local assets ...
	I1212 19:49:07.443421   48438 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-2315/.minikube/files for local assets ...
	I1212 19:49:07.443503   48438 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1212 19:49:07.443510   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem -> /etc/ssl/certs/41202.pem
	I1212 19:49:07.443585   48438 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> hosts in /etc/test/nested/copy/4120
	I1212 19:49:07.443589   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> /etc/test/nested/copy/4120/hosts
	I1212 19:49:07.443629   48438 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4120
	I1212 19:49:07.450818   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1212 19:49:07.468474   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts --> /etc/test/nested/copy/4120/hosts (40 bytes)
	I1212 19:49:07.485034   48438 start.go:296] duration metric: took 165.952143ms for postStartSetup
	I1212 19:49:07.485111   48438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 19:49:07.485180   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:07.502057   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:07.604226   48438 command_runner.go:130] > 12%
	I1212 19:49:07.604746   48438 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 19:49:07.609551   48438 command_runner.go:130] > 172G
	I1212 19:49:07.609593   48438 fix.go:56] duration metric: took 1.231809331s for fixHost
	I1212 19:49:07.609604   48438 start.go:83] releasing machines lock for "functional-384006", held for 1.231841888s
	I1212 19:49:07.609687   48438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384006
	I1212 19:49:07.626230   48438 ssh_runner.go:195] Run: cat /version.json
	I1212 19:49:07.626285   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:07.626592   48438 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 19:49:07.626649   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:07.648515   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:07.651511   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:07.751468   48438 command_runner.go:130] > {"iso_version": "v1.37.0-1765481609-22101", "kicbase_version": "v0.0.48-1765505794-22112", "minikube_version": "v1.37.0", "commit": "2e51b54b5cee5d454381ac23cfe3d8d395879671"}
	I1212 19:49:07.751688   48438 ssh_runner.go:195] Run: systemctl --version
	I1212 19:49:07.840262   48438 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 19:49:07.843071   48438 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1212 19:49:07.843106   48438 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1212 19:49:07.843235   48438 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 19:49:07.847707   48438 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 19:49:07.847791   48438 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 19:49:07.847870   48438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 19:49:07.855348   48438 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 19:49:07.855380   48438 start.go:496] detecting cgroup driver to use...
	I1212 19:49:07.855411   48438 detect.go:187] detected "cgroupfs" cgroup driver on host os
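
The probe logged above settled on "cgroupfs". A simplified stand-in for that detection (not minikube's exact logic): report "systemd" only when /sys/fs/cgroup is a cgroup2 mount and PID 1 is systemd, otherwise fall back to "cgroupfs":

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/sys/unix"
    )

    func detectCgroupDriver() string {
    	var st unix.Statfs_t
    	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
    		return "cgroupfs"
    	}
    	// cgroup v2 unified hierarchy exposes the CGROUP2 filesystem magic.
    	if st.Type == unix.CGROUP2_SUPER_MAGIC {
    		if comm, err := os.ReadFile("/proc/1/comm"); err == nil && string(comm) == "systemd\n" {
    			return "systemd"
    		}
    	}
    	return "cgroupfs"
    }

    func main() { fmt.Println(detectCgroupDriver()) }
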
	I1212 19:49:07.855473   48438 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 19:49:07.872745   48438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 19:49:07.888438   48438 docker.go:218] disabling cri-docker service (if available) ...
	I1212 19:49:07.888499   48438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 19:49:07.905328   48438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 19:49:07.922378   48438 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 19:49:08.040559   48438 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 19:49:08.153632   48438 docker.go:234] disabling docker service ...
	I1212 19:49:08.153749   48438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 19:49:08.170255   48438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 19:49:08.183563   48438 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 19:49:08.296935   48438 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 19:49:08.413119   48438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 19:49:08.425880   48438 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 19:49:08.438681   48438 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 19:49:08.439732   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 19:49:08.448541   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 19:49:08.457430   48438 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 19:49:08.457506   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 19:49:08.466099   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:49:08.474729   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 19:49:08.483278   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:49:08.491712   48438 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 19:49:08.499807   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 19:49:08.508171   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 19:49:08.517078   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
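
The run of sed commands above patches /etc/containerd/config.toml in place, keyed on anchored patterns so indentation is preserved. The same kind of rewrite in Go, shown for the SystemdCgroup knob only:

    package main

    import (
    	"log"
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, out, 0644); err != nil {
    		log.Fatal(err)
    	}
    }
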
	I1212 19:49:08.525348   48438 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 19:49:08.531636   48438 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 19:49:08.532621   48438 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 19:49:08.539615   48438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:49:08.670670   48438 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 19:49:08.806796   48438 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 19:49:08.806894   48438 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 19:49:08.810696   48438 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1212 19:49:08.810773   48438 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 19:49:08.810802   48438 command_runner.go:130] > Device: 0,72	Inode: 1616        Links: 1
	I1212 19:49:08.810829   48438 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 19:49:08.810848   48438 command_runner.go:130] > Access: 2025-12-12 19:49:08.757711126 +0000
	I1212 19:49:08.810866   48438 command_runner.go:130] > Modify: 2025-12-12 19:49:08.757711126 +0000
	I1212 19:49:08.810881   48438 command_runner.go:130] > Change: 2025-12-12 19:49:08.757711126 +0000
	I1212 19:49:08.810904   48438 command_runner.go:130] >  Birth: -
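
Stat output like the above is what the 60s wait checks for after restarting containerd. A sketch of that poll loop, with the retry interval assumed:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket stats the path until it exists and is a unix socket,
    // or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond) // assumed interval
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
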
	I1212 19:49:08.811086   48438 start.go:564] Will wait 60s for crictl version
	I1212 19:49:08.811174   48438 ssh_runner.go:195] Run: which crictl
	I1212 19:49:08.814485   48438 command_runner.go:130] > /usr/local/bin/crictl
	I1212 19:49:08.814611   48438 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 19:49:08.838884   48438 command_runner.go:130] > Version:  0.1.0
	I1212 19:49:08.838955   48438 command_runner.go:130] > RuntimeName:  containerd
	I1212 19:49:08.838976   48438 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1212 19:49:08.838997   48438 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 19:49:08.840776   48438 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 19:49:08.840864   48438 ssh_runner.go:195] Run: containerd --version
	I1212 19:49:08.863238   48438 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1212 19:49:08.864954   48438 ssh_runner.go:195] Run: containerd --version
	I1212 19:49:08.884422   48438 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1212 19:49:08.891508   48438 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 19:49:08.894468   48438 cli_runner.go:164] Run: docker network inspect functional-384006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 19:49:08.910430   48438 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 19:49:08.914297   48438 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1212 19:49:08.914409   48438 kubeadm.go:884] updating cluster {Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 19:49:08.914505   48438 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 19:49:08.914560   48438 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 19:49:08.938916   48438 command_runner.go:130] > {
	I1212 19:49:08.938935   48438 command_runner.go:130] >   "images":  [
	I1212 19:49:08.938940   48438 command_runner.go:130] >     {
	I1212 19:49:08.938949   48438 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 19:49:08.938953   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.938959   48438 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 19:49:08.938962   48438 command_runner.go:130] >       ],
	I1212 19:49:08.938967   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.938980   48438 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1212 19:49:08.938983   48438 command_runner.go:130] >       ],
	I1212 19:49:08.938988   48438 command_runner.go:130] >       "size":  "40636774",
	I1212 19:49:08.938991   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.938995   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.938998   48438 command_runner.go:130] >     },
	I1212 19:49:08.939001   48438 command_runner.go:130] >     {
	I1212 19:49:08.939009   48438 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 19:49:08.939013   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939018   48438 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 19:49:08.939022   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939026   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939034   48438 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 19:49:08.939038   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939045   48438 command_runner.go:130] >       "size":  "8034419",
	I1212 19:49:08.939049   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939053   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939056   48438 command_runner.go:130] >     },
	I1212 19:49:08.939059   48438 command_runner.go:130] >     {
	I1212 19:49:08.939066   48438 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 19:49:08.939069   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939075   48438 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 19:49:08.939078   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939084   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939091   48438 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1212 19:49:08.939095   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939100   48438 command_runner.go:130] >       "size":  "21168808",
	I1212 19:49:08.939104   48438 command_runner.go:130] >       "username":  "nonroot",
	I1212 19:49:08.939108   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939111   48438 command_runner.go:130] >     },
	I1212 19:49:08.939115   48438 command_runner.go:130] >     {
	I1212 19:49:08.939121   48438 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 19:49:08.939125   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939130   48438 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 19:49:08.939133   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939137   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939154   48438 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1212 19:49:08.939157   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939161   48438 command_runner.go:130] >       "size":  "21136588",
	I1212 19:49:08.939166   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.939170   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.939173   48438 command_runner.go:130] >       },
	I1212 19:49:08.939177   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939181   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939184   48438 command_runner.go:130] >     },
	I1212 19:49:08.939187   48438 command_runner.go:130] >     {
	I1212 19:49:08.939193   48438 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 19:49:08.939200   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939206   48438 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 19:49:08.939209   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939213   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939220   48438 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1212 19:49:08.939224   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939228   48438 command_runner.go:130] >       "size":  "24678359",
	I1212 19:49:08.939231   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.939241   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.939244   48438 command_runner.go:130] >       },
	I1212 19:49:08.939248   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939252   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939254   48438 command_runner.go:130] >     },
	I1212 19:49:08.939257   48438 command_runner.go:130] >     {
	I1212 19:49:08.939264   48438 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 19:49:08.939268   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939273   48438 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 19:49:08.939276   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939280   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939288   48438 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1212 19:49:08.939291   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939295   48438 command_runner.go:130] >       "size":  "20661043",
	I1212 19:49:08.939299   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.939302   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.939305   48438 command_runner.go:130] >       },
	I1212 19:49:08.939309   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939313   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939316   48438 command_runner.go:130] >     },
	I1212 19:49:08.939319   48438 command_runner.go:130] >     {
	I1212 19:49:08.939326   48438 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 19:49:08.939330   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939334   48438 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 19:49:08.939338   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939345   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939353   48438 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 19:49:08.939356   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939360   48438 command_runner.go:130] >       "size":  "22429671",
	I1212 19:49:08.939364   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939368   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939370   48438 command_runner.go:130] >     },
	I1212 19:49:08.939375   48438 command_runner.go:130] >     {
	I1212 19:49:08.939381   48438 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 19:49:08.939385   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939390   48438 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 19:49:08.939393   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939397   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939405   48438 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1212 19:49:08.939408   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939412   48438 command_runner.go:130] >       "size":  "15391364",
	I1212 19:49:08.939416   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.939420   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.939423   48438 command_runner.go:130] >       },
	I1212 19:49:08.939427   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939430   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939433   48438 command_runner.go:130] >     },
	I1212 19:49:08.939437   48438 command_runner.go:130] >     {
	I1212 19:49:08.939443   48438 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 19:49:08.939447   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939452   48438 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 19:49:08.939454   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939458   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939465   48438 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1212 19:49:08.939469   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939473   48438 command_runner.go:130] >       "size":  "267939",
	I1212 19:49:08.939476   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.939480   48438 command_runner.go:130] >         "value":  "65535"
	I1212 19:49:08.939486   48438 command_runner.go:130] >       },
	I1212 19:49:08.939490   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939493   48438 command_runner.go:130] >       "pinned":  true
	I1212 19:49:08.939496   48438 command_runner.go:130] >     }
	I1212 19:49:08.939499   48438 command_runner.go:130] >   ]
	I1212 19:49:08.939502   48438 command_runner.go:130] > }
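
The image list comes back as JSON with id/repoTags/repoDigests/size/pinned fields, as dumped above. A sketch of decoding it the way a preload check would, with the struct shape inferred from that output:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    )

    // Field names match the `crictl images --output json` dump above.
    type crictlImage struct {
    	ID       string   `json:"id"`
    	RepoTags []string `json:"repoTags"`
    	Size     string   `json:"size"`
    	Pinned   bool     `json:"pinned"`
    }

    type crictlImages struct {
    	Images []crictlImage `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	var imgs crictlImages
    	if err := json.Unmarshal(out, &imgs); err != nil {
    		log.Fatal(err)
    	}
    	for _, img := range imgs.Images {
    		fmt.Println(img.RepoTags, img.Size)
    	}
    }
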
	I1212 19:49:08.940984   48438 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 19:49:08.941004   48438 containerd.go:534] Images already preloaded, skipping extraction
	I1212 19:49:08.941060   48438 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 19:49:08.962883   48438 command_runner.go:130] > {
	I1212 19:49:08.962905   48438 command_runner.go:130] >   "images":  [
	I1212 19:49:08.962910   48438 command_runner.go:130] >     {
	I1212 19:49:08.962919   48438 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 19:49:08.962924   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.962930   48438 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 19:49:08.962934   48438 command_runner.go:130] >       ],
	I1212 19:49:08.962938   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.962948   48438 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1212 19:49:08.962955   48438 command_runner.go:130] >       ],
	I1212 19:49:08.962964   48438 command_runner.go:130] >       "size":  "40636774",
	I1212 19:49:08.962971   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.962975   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.962985   48438 command_runner.go:130] >     },
	I1212 19:49:08.962993   48438 command_runner.go:130] >     {
	I1212 19:49:08.963005   48438 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 19:49:08.963012   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963017   48438 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 19:49:08.963021   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963026   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963035   48438 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 19:49:08.963040   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963045   48438 command_runner.go:130] >       "size":  "8034419",
	I1212 19:49:08.963049   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963055   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963058   48438 command_runner.go:130] >     },
	I1212 19:49:08.963064   48438 command_runner.go:130] >     {
	I1212 19:49:08.963071   48438 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 19:49:08.963081   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963086   48438 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 19:49:08.963090   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963104   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963113   48438 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1212 19:49:08.963116   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963123   48438 command_runner.go:130] >       "size":  "21168808",
	I1212 19:49:08.963127   48438 command_runner.go:130] >       "username":  "nonroot",
	I1212 19:49:08.963132   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963137   48438 command_runner.go:130] >     },
	I1212 19:49:08.963146   48438 command_runner.go:130] >     {
	I1212 19:49:08.963157   48438 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 19:49:08.963170   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963175   48438 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 19:49:08.963178   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963187   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963198   48438 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1212 19:49:08.963201   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963210   48438 command_runner.go:130] >       "size":  "21136588",
	I1212 19:49:08.963214   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.963221   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.963224   48438 command_runner.go:130] >       },
	I1212 19:49:08.963228   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963234   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963238   48438 command_runner.go:130] >     },
	I1212 19:49:08.963241   48438 command_runner.go:130] >     {
	I1212 19:49:08.963248   48438 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 19:49:08.963255   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963260   48438 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 19:49:08.963263   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963266   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963274   48438 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1212 19:49:08.963281   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963285   48438 command_runner.go:130] >       "size":  "24678359",
	I1212 19:49:08.963288   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.963298   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.963302   48438 command_runner.go:130] >       },
	I1212 19:49:08.963309   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963313   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963319   48438 command_runner.go:130] >     },
	I1212 19:49:08.963322   48438 command_runner.go:130] >     {
	I1212 19:49:08.963329   48438 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 19:49:08.963336   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963341   48438 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 19:49:08.963344   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963348   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963356   48438 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1212 19:49:08.963363   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963367   48438 command_runner.go:130] >       "size":  "20661043",
	I1212 19:49:08.963370   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.963374   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.963382   48438 command_runner.go:130] >       },
	I1212 19:49:08.963389   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963393   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963396   48438 command_runner.go:130] >     },
	I1212 19:49:08.963399   48438 command_runner.go:130] >     {
	I1212 19:49:08.963406   48438 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 19:49:08.963413   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963418   48438 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 19:49:08.963421   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963425   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963433   48438 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 19:49:08.963440   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963444   48438 command_runner.go:130] >       "size":  "22429671",
	I1212 19:49:08.963448   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963452   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963455   48438 command_runner.go:130] >     },
	I1212 19:49:08.963458   48438 command_runner.go:130] >     {
	I1212 19:49:08.963465   48438 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 19:49:08.963472   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963478   48438 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 19:49:08.963483   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963487   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963498   48438 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1212 19:49:08.963503   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963509   48438 command_runner.go:130] >       "size":  "15391364",
	I1212 19:49:08.963515   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.963518   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.963521   48438 command_runner.go:130] >       },
	I1212 19:49:08.963525   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963529   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963534   48438 command_runner.go:130] >     },
	I1212 19:49:08.963537   48438 command_runner.go:130] >     {
	I1212 19:49:08.963547   48438 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 19:49:08.963555   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963560   48438 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 19:49:08.963566   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963570   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963580   48438 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1212 19:49:08.963587   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963591   48438 command_runner.go:130] >       "size":  "267939",
	I1212 19:49:08.963594   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.963598   48438 command_runner.go:130] >         "value":  "65535"
	I1212 19:49:08.963604   48438 command_runner.go:130] >       },
	I1212 19:49:08.963611   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963615   48438 command_runner.go:130] >       "pinned":  true
	I1212 19:49:08.963618   48438 command_runner.go:130] >     }
	I1212 19:49:08.963621   48438 command_runner.go:130] >   ]
	I1212 19:49:08.963624   48438 command_runner.go:130] > }
	I1212 19:49:08.965735   48438 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 19:49:08.965756   48438 cache_images.go:86] Images are preloaded, skipping loading
	I1212 19:49:08.965764   48438 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1212 19:49:08.965868   48438 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-384006 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 19:49:08.965936   48438 ssh_runner.go:195] Run: sudo crictl info
	I1212 19:49:08.990907   48438 command_runner.go:130] > {
	I1212 19:49:08.990927   48438 command_runner.go:130] >   "cniconfig": {
	I1212 19:49:08.990932   48438 command_runner.go:130] >     "Networks": [
	I1212 19:49:08.990936   48438 command_runner.go:130] >       {
	I1212 19:49:08.990942   48438 command_runner.go:130] >         "Config": {
	I1212 19:49:08.990947   48438 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1212 19:49:08.990980   48438 command_runner.go:130] >           "Name": "cni-loopback",
	I1212 19:49:08.990997   48438 command_runner.go:130] >           "Plugins": [
	I1212 19:49:08.991002   48438 command_runner.go:130] >             {
	I1212 19:49:08.991010   48438 command_runner.go:130] >               "Network": {
	I1212 19:49:08.991014   48438 command_runner.go:130] >                 "ipam": {},
	I1212 19:49:08.991020   48438 command_runner.go:130] >                 "type": "loopback"
	I1212 19:49:08.991023   48438 command_runner.go:130] >               },
	I1212 19:49:08.991033   48438 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1212 19:49:08.991041   48438 command_runner.go:130] >             }
	I1212 19:49:08.991063   48438 command_runner.go:130] >           ],
	I1212 19:49:08.991073   48438 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1212 19:49:08.991080   48438 command_runner.go:130] >         },
	I1212 19:49:08.991089   48438 command_runner.go:130] >         "IFName": "lo"
	I1212 19:49:08.991095   48438 command_runner.go:130] >       }
	I1212 19:49:08.991098   48438 command_runner.go:130] >     ],
	I1212 19:49:08.991103   48438 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1212 19:49:08.991106   48438 command_runner.go:130] >     "PluginDirs": [
	I1212 19:49:08.991109   48438 command_runner.go:130] >       "/opt/cni/bin"
	I1212 19:49:08.991113   48438 command_runner.go:130] >     ],
	I1212 19:49:08.991117   48438 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1212 19:49:08.991135   48438 command_runner.go:130] >     "Prefix": "eth"
	I1212 19:49:08.991151   48438 command_runner.go:130] >   },
	I1212 19:49:08.991154   48438 command_runner.go:130] >   "config": {
	I1212 19:49:08.991158   48438 command_runner.go:130] >     "cdiSpecDirs": [
	I1212 19:49:08.991171   48438 command_runner.go:130] >       "/etc/cdi",
	I1212 19:49:08.991184   48438 command_runner.go:130] >       "/var/run/cdi"
	I1212 19:49:08.991188   48438 command_runner.go:130] >     ],
	I1212 19:49:08.991191   48438 command_runner.go:130] >     "cni": {
	I1212 19:49:08.991195   48438 command_runner.go:130] >       "binDir": "",
	I1212 19:49:08.991202   48438 command_runner.go:130] >       "binDirs": [
	I1212 19:49:08.991206   48438 command_runner.go:130] >         "/opt/cni/bin"
	I1212 19:49:08.991209   48438 command_runner.go:130] >       ],
	I1212 19:49:08.991216   48438 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1212 19:49:08.991220   48438 command_runner.go:130] >       "confTemplate": "",
	I1212 19:49:08.991224   48438 command_runner.go:130] >       "ipPref": "",
	I1212 19:49:08.991227   48438 command_runner.go:130] >       "maxConfNum": 1,
	I1212 19:49:08.991231   48438 command_runner.go:130] >       "setupSerially": false,
	I1212 19:49:08.991235   48438 command_runner.go:130] >       "useInternalLoopback": false
	I1212 19:49:08.991248   48438 command_runner.go:130] >     },
	I1212 19:49:08.991264   48438 command_runner.go:130] >     "containerd": {
	I1212 19:49:08.991273   48438 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1212 19:49:08.991288   48438 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1212 19:49:08.991302   48438 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1212 19:49:08.991311   48438 command_runner.go:130] >       "runtimes": {
	I1212 19:49:08.991317   48438 command_runner.go:130] >         "runc": {
	I1212 19:49:08.991321   48438 command_runner.go:130] >           "ContainerAnnotations": null,
	I1212 19:49:08.991325   48438 command_runner.go:130] >           "PodAnnotations": null,
	I1212 19:49:08.991329   48438 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1212 19:49:08.991340   48438 command_runner.go:130] >           "cgroupWritable": false,
	I1212 19:49:08.991344   48438 command_runner.go:130] >           "cniConfDir": "",
	I1212 19:49:08.991347   48438 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1212 19:49:08.991351   48438 command_runner.go:130] >           "io_type": "",
	I1212 19:49:08.991366   48438 command_runner.go:130] >           "options": {
	I1212 19:49:08.991378   48438 command_runner.go:130] >             "BinaryName": "",
	I1212 19:49:08.991382   48438 command_runner.go:130] >             "CriuImagePath": "",
	I1212 19:49:08.991386   48438 command_runner.go:130] >             "CriuWorkPath": "",
	I1212 19:49:08.991400   48438 command_runner.go:130] >             "IoGid": 0,
	I1212 19:49:08.991410   48438 command_runner.go:130] >             "IoUid": 0,
	I1212 19:49:08.991414   48438 command_runner.go:130] >             "NoNewKeyring": false,
	I1212 19:49:08.991418   48438 command_runner.go:130] >             "Root": "",
	I1212 19:49:08.991422   48438 command_runner.go:130] >             "ShimCgroup": "",
	I1212 19:49:08.991427   48438 command_runner.go:130] >             "SystemdCgroup": false
	I1212 19:49:08.991433   48438 command_runner.go:130] >           },
	I1212 19:49:08.991439   48438 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1212 19:49:08.991455   48438 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1212 19:49:08.991461   48438 command_runner.go:130] >           "runtimePath": "",
	I1212 19:49:08.991476   48438 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1212 19:49:08.991487   48438 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1212 19:49:08.991491   48438 command_runner.go:130] >           "snapshotter": ""
	I1212 19:49:08.991503   48438 command_runner.go:130] >         }
	I1212 19:49:08.991510   48438 command_runner.go:130] >       }
	I1212 19:49:08.991513   48438 command_runner.go:130] >     },
	I1212 19:49:08.991525   48438 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1212 19:49:08.991540   48438 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1212 19:49:08.991547   48438 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1212 19:49:08.991554   48438 command_runner.go:130] >     "disableApparmor": false,
	I1212 19:49:08.991559   48438 command_runner.go:130] >     "disableHugetlbController": true,
	I1212 19:49:08.991564   48438 command_runner.go:130] >     "disableProcMount": false,
	I1212 19:49:08.991583   48438 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1212 19:49:08.991588   48438 command_runner.go:130] >     "enableCDI": true,
	I1212 19:49:08.991603   48438 command_runner.go:130] >     "enableSelinux": false,
	I1212 19:49:08.991616   48438 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1212 19:49:08.991621   48438 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1212 19:49:08.991627   48438 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1212 19:49:08.991634   48438 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1212 19:49:08.991639   48438 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1212 19:49:08.991643   48438 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1212 19:49:08.991653   48438 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1212 19:49:08.991658   48438 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1212 19:49:08.991662   48438 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1212 19:49:08.991678   48438 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1212 19:49:08.991689   48438 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1212 19:49:08.991694   48438 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1212 19:49:08.991696   48438 command_runner.go:130] >   },
	I1212 19:49:08.991700   48438 command_runner.go:130] >   "features": {
	I1212 19:49:08.991704   48438 command_runner.go:130] >     "supplemental_groups_policy": true
	I1212 19:49:08.991706   48438 command_runner.go:130] >   },
	I1212 19:49:08.991710   48438 command_runner.go:130] >   "golang": "go1.24.9",
	I1212 19:49:08.991719   48438 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1212 19:49:08.991728   48438 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1212 19:49:08.991732   48438 command_runner.go:130] >   "runtimeHandlers": [
	I1212 19:49:08.991735   48438 command_runner.go:130] >     {
	I1212 19:49:08.991739   48438 command_runner.go:130] >       "features": {
	I1212 19:49:08.991743   48438 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1212 19:49:08.991747   48438 command_runner.go:130] >         "user_namespaces": true
	I1212 19:49:08.991751   48438 command_runner.go:130] >       }
	I1212 19:49:08.991759   48438 command_runner.go:130] >     },
	I1212 19:49:08.991762   48438 command_runner.go:130] >     {
	I1212 19:49:08.991766   48438 command_runner.go:130] >       "features": {
	I1212 19:49:08.991770   48438 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1212 19:49:08.991774   48438 command_runner.go:130] >         "user_namespaces": true
	I1212 19:49:08.991796   48438 command_runner.go:130] >       },
	I1212 19:49:08.991800   48438 command_runner.go:130] >       "name": "runc"
	I1212 19:49:08.991803   48438 command_runner.go:130] >     }
	I1212 19:49:08.991807   48438 command_runner.go:130] >   ],
	I1212 19:49:08.991875   48438 command_runner.go:130] >   "status": {
	I1212 19:49:08.991889   48438 command_runner.go:130] >     "conditions": [
	I1212 19:49:08.991892   48438 command_runner.go:130] >       {
	I1212 19:49:08.991895   48438 command_runner.go:130] >         "message": "",
	I1212 19:49:08.991899   48438 command_runner.go:130] >         "reason": "",
	I1212 19:49:08.991904   48438 command_runner.go:130] >         "status": true,
	I1212 19:49:08.991918   48438 command_runner.go:130] >         "type": "RuntimeReady"
	I1212 19:49:08.991921   48438 command_runner.go:130] >       },
	I1212 19:49:08.991925   48438 command_runner.go:130] >       {
	I1212 19:49:08.991939   48438 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1212 19:49:08.991955   48438 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1212 19:49:08.991963   48438 command_runner.go:130] >         "status": false,
	I1212 19:49:08.991967   48438 command_runner.go:130] >         "type": "NetworkReady"
	I1212 19:49:08.991970   48438 command_runner.go:130] >       },
	I1212 19:49:08.991989   48438 command_runner.go:130] >       {
	I1212 19:49:08.992014   48438 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1212 19:49:08.992028   48438 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1212 19:49:08.992037   48438 command_runner.go:130] >         "status": false,
	I1212 19:49:08.992042   48438 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1212 19:49:08.992045   48438 command_runner.go:130] >       }
	I1212 19:49:08.992058   48438 command_runner.go:130] >     ]
	I1212 19:49:08.992068   48438 command_runner.go:130] >   }
	I1212 19:49:08.992071   48438 command_runner.go:130] > }
	I1212 19:49:08.994409   48438 cni.go:84] Creating CNI manager for ""
	I1212 19:49:08.994432   48438 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 19:49:08.994453   48438 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
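
The three lines above record minikube's CNI auto-selection: the containerd status dump reported NetworkReady=false ("cni plugin not initialized"), and for the docker driver paired with the containerd runtime minikube recommends kindnet and a 10.244.0.0/16 pod CIDR. A minimal Go sketch of that decision rule, using a hypothetical chooseCNI helper rather than minikube's actual cni.go:

package main

import "fmt"

// chooseCNI mirrors the decision logged above: the docker driver with
// a non-docker runtime ships no working bridge configuration, so a pod
// network plugin is required. Hypothetical simplification.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime == "containerd" {
		return "kindnet" // matches "recommending kindnet" in the log
	}
	return "" // otherwise leave CNI selection to later logic
}

func main() {
	fmt.Println(chooseCNI("docker", "containerd")) // kindnet
}
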
	I1212 19:49:08.994474   48438 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-384006 NodeName:functional-384006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 19:49:08.994579   48438 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-384006"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
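The kubeadm.yaml above is rendered from the kubeadm options struct and shipped to the node as kubeadm.yaml.new (see the scp below). A hedged sketch of how such a multi-document config can be rendered with Go's text/template; the template fragment and field names here are illustrative, not minikube's actual templates:

package main

import (
	"os"
	"text/template"
)

// Illustrative subset of the InitConfiguration rendered above.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	// Values taken from the kubeadm options logged above.
	if err := t.Execute(os.Stdout, map[string]any{
		"AdvertiseAddress": "192.168.49.2",
		"BindPort":         8441,
		"CRISocket":        "unix:///run/containerd/containerd.sock",
		"NodeName":         "functional-384006",
	}); err != nil {
		panic(err)
	}
}
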
	I1212 19:49:08.994644   48438 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 19:49:09.001254   48438 command_runner.go:130] > kubeadm
	I1212 19:49:09.001273   48438 command_runner.go:130] > kubectl
	I1212 19:49:09.001277   48438 command_runner.go:130] > kubelet
	I1212 19:49:09.002097   48438 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 19:49:09.002172   48438 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 19:49:09.009620   48438 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 19:49:09.025282   48438 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 19:49:09.038423   48438 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
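
"scp memory" in the three lines above means the payload never exists as a local file: minikube streams an in-memory byte slice over its SSH session and writes it on the node. A rough equivalent using the system ssh client and sudo tee (hypothetical pushBytes helper; minikube itself uses an embedded SSH client rather than shelling out):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// pushBytes streams buf over ssh and writes it to dst on the remote
// host with root privileges, roughly what "scp memory --> dst" does.
func pushBytes(host, dst string, buf []byte) error {
	cmd := exec.Command("ssh", host,
		fmt.Sprintf("sudo tee %s >/dev/null", dst))
	cmd.Stdin = bytes.NewReader(buf)
	return cmd.Run()
}

func main() {
	conf := []byte("KUBELET_EXTRA_ARGS=--node-ip=192.168.49.2\n")
	if err := pushBytes("docker@127.0.0.1", "/etc/default/kubelet", conf); err != nil {
		fmt.Println("push failed:", err)
	}
}
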
	I1212 19:49:09.054506   48438 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 19:49:09.058001   48438 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1212 19:49:09.058066   48438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:49:09.175064   48438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:49:09.445347   48438 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006 for IP: 192.168.49.2
	I1212 19:49:09.445426   48438 certs.go:195] generating shared ca certs ...
	I1212 19:49:09.445484   48438 certs.go:227] acquiring lock for ca certs: {Name:mk39256c1929fe0803d745b94bd58afc348a7e3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:49:09.445704   48438 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key
	I1212 19:49:09.445799   48438 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key
	I1212 19:49:09.445839   48438 certs.go:257] generating profile certs ...
	I1212 19:49:09.446025   48438 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.key
	I1212 19:49:09.446164   48438 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key.6e756d1b
	I1212 19:49:09.446275   48438 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key
	I1212 19:49:09.446313   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 19:49:09.446386   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 19:49:09.446438   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 19:49:09.446492   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 19:49:09.446544   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 19:49:09.446605   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 19:49:09.446663   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 19:49:09.446721   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 19:49:09.446856   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem (1338 bytes)
	W1212 19:49:09.446943   48438 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1212 19:49:09.447016   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 19:49:09.447074   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem (1078 bytes)
	I1212 19:49:09.447157   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem (1123 bytes)
	I1212 19:49:09.447233   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem (1679 bytes)
	I1212 19:49:09.447516   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1212 19:49:09.447598   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem -> /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.447652   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem -> /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.447686   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.448483   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 19:49:09.470612   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 19:49:09.491665   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 19:49:09.514138   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 19:49:09.535795   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 19:49:09.552964   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 19:49:09.570164   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 19:49:09.587343   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 19:49:09.604384   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1212 19:49:09.621471   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1212 19:49:09.638910   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 19:49:09.656615   48438 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 19:49:09.669235   48438 ssh_runner.go:195] Run: openssl version
	I1212 19:49:09.674787   48438 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1212 19:49:09.675343   48438 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.682988   48438 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1212 19:49:09.690425   48438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.693996   48438 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 19:40 /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.694309   48438 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:40 /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.694370   48438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.734801   48438 command_runner.go:130] > 3ec20f2e
	I1212 19:49:09.735274   48438 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 19:49:09.742485   48438 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.749966   48438 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 19:49:09.757755   48438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.761677   48438 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 19:30 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.761712   48438 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:30 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.761771   48438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.803349   48438 command_runner.go:130] > b5213941
	I1212 19:49:09.803809   48438 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 19:49:09.811062   48438 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.818242   48438 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1212 19:49:09.825568   48438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.829043   48438 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 19:40 /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.829382   48438 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:40 /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.829462   48438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.872087   48438 command_runner.go:130] > 51391683
	I1212 19:49:09.872525   48438 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
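
The test/ln/openssl/test sequence above (repeated once per CA) installs each certificate into the OpenSSL trust store, which looks certificates up through a subject-name hash symlink named <hash>.0. A sketch of the same steps, assuming an openssl binary on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA mirrors the logged steps: compute the OpenSSL subject
// hash, then symlink /etc/ssl/certs/<hash>.0 at the certificate.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
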
	I1212 19:49:09.879635   48438 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:49:09.883004   48438 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:49:09.883053   48438 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1212 19:49:09.883072   48438 command_runner.go:130] > Device: 259,1	Inode: 1317518     Links: 1
	I1212 19:49:09.883079   48438 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 19:49:09.883085   48438 command_runner.go:130] > Access: 2025-12-12 19:45:02.427863285 +0000
	I1212 19:49:09.883090   48438 command_runner.go:130] > Modify: 2025-12-12 19:40:58.462325249 +0000
	I1212 19:49:09.883095   48438 command_runner.go:130] > Change: 2025-12-12 19:40:58.462325249 +0000
	I1212 19:49:09.883100   48438 command_runner.go:130] >  Birth: 2025-12-12 19:40:58.462325249 +0000
	I1212 19:49:09.883177   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 19:49:09.925331   48438 command_runner.go:130] > Certificate will not expire
	I1212 19:49:09.925758   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 19:49:09.966336   48438 command_runner.go:130] > Certificate will not expire
	I1212 19:49:09.966825   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 19:49:10.007601   48438 command_runner.go:130] > Certificate will not expire
	I1212 19:49:10.008047   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 19:49:10.052009   48438 command_runner.go:130] > Certificate will not expire
	I1212 19:49:10.052500   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 19:49:10.094223   48438 command_runner.go:130] > Certificate will not expire
	I1212 19:49:10.094385   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 19:49:10.136742   48438 command_runner.go:130] > Certificate will not expire
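
Each -checkend 86400 invocation above asks whether a certificate expires within the next 24 hours; minikube would regenerate any that do. The equivalent check in pure Go with crypto/x509 (a sketch that reads the first certificate from one PEM file):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within d, the pure-Go analogue of `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
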
	I1212 19:49:10.136814   48438 kubeadm.go:401] StartCluster: {Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:49:10.136904   48438 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 19:49:10.136973   48438 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:49:10.167070   48438 cri.go:89] found id: ""
	I1212 19:49:10.167141   48438 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 19:49:10.174626   48438 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 19:49:10.174649   48438 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 19:49:10.174663   48438 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 19:49:10.175405   48438 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 19:49:10.175423   48438 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 19:49:10.175476   48438 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 19:49:10.183010   48438 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:49:10.183461   48438 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-384006" does not appear in /home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:49:10.183602   48438 kubeconfig.go:62] /home/jenkins/minikube-integration/22112-2315/kubeconfig needs updating (will repair): [kubeconfig missing "functional-384006" cluster setting kubeconfig missing "functional-384006" context setting]
	I1212 19:49:10.183992   48438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/kubeconfig: {Name:mke1d79e374217e0c5bc78bc2d9631db0e1e9bda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
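
The repair above adds the missing "functional-384006" cluster and context entries to the shared kubeconfig instead of rewriting the whole file. A minimal client-go sketch of that kind of merge; the field values are taken from the log, but this is not minikube's kubeconfig.go:

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/22112-2315/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	// Add the cluster and context settings the log reports as missing.
	cfg.Clusters["functional-384006"] = &api.Cluster{
		Server:               "https://192.168.49.2:8441",
		CertificateAuthority: "/home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt",
	}
	cfg.Contexts["functional-384006"] = &api.Context{
		Cluster:  "functional-384006",
		AuthInfo: "functional-384006",
	}
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}
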
	I1212 19:49:10.184411   48438 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:49:10.184572   48438 kapi.go:59] client config for functional-384006: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.key", CAFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 19:49:10.185056   48438 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 19:49:10.185097   48438 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 19:49:10.185107   48438 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 19:49:10.185113   48438 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 19:49:10.185120   48438 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 19:49:10.185448   48438 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 19:49:10.185546   48438 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 19:49:10.194572   48438 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1212 19:49:10.194610   48438 kubeadm.go:602] duration metric: took 19.175488ms to restartPrimaryControlPlane
	I1212 19:49:10.194619   48438 kubeadm.go:403] duration metric: took 57.811789ms to StartCluster
	I1212 19:49:10.194633   48438 settings.go:142] acquiring lock: {Name:mk405cd0853bb1c41336dcaeeb8fe9a56ff7ca00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:49:10.194694   48438 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:49:10.195302   48438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/kubeconfig: {Name:mke1d79e374217e0c5bc78bc2d9631db0e1e9bda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:49:10.195505   48438 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 19:49:10.195860   48438 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 19:49:10.195913   48438 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 19:49:10.195982   48438 addons.go:70] Setting storage-provisioner=true in profile "functional-384006"
	I1212 19:49:10.195999   48438 addons.go:239] Setting addon storage-provisioner=true in "functional-384006"
	I1212 19:49:10.196020   48438 host.go:66] Checking if "functional-384006" exists ...
	I1212 19:49:10.196498   48438 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
	I1212 19:49:10.197078   48438 addons.go:70] Setting default-storageclass=true in profile "functional-384006"
	I1212 19:49:10.197104   48438 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-384006"
	I1212 19:49:10.197385   48438 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
	I1212 19:49:10.200737   48438 out.go:179] * Verifying Kubernetes components...
	I1212 19:49:10.203657   48438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:49:10.242694   48438 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:49:10.242850   48438 kapi.go:59] client config for functional-384006: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.key", CAFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 19:49:10.243167   48438 addons.go:239] Setting addon default-storageclass=true in "functional-384006"
	I1212 19:49:10.243197   48438 host.go:66] Checking if "functional-384006" exists ...
	I1212 19:49:10.243613   48438 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
	I1212 19:49:10.244264   48438 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 19:49:10.248400   48438 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:10.248422   48438 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 19:49:10.248484   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:10.280006   48438 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:10.280027   48438 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 19:49:10.280091   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:10.292135   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:10.320079   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:10.410663   48438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:49:10.453525   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:10.485844   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:11.196335   48438 node_ready.go:35] waiting up to 6m0s for node "functional-384006" to be "Ready" ...
	I1212 19:49:11.196458   48438 type.go:168] "Request Body" body=""
	I1212 19:49:11.196510   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:11.196726   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:11.196748   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.196769   48438 retry.go:31] will retry after 366.342967ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.196806   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:11.196817   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.196823   48438 retry.go:31] will retry after 300.335318ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.196876   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
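
Every failed apply above is handed to a retry helper that sleeps a randomized, growing interval (366ms, 444ms, 595ms, ... up to 2.8s further down) before rerunning the command, so the addon manifests land once the apiserver is reachable again. A generic sketch of that pattern, not minikube's actual retry.go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a jittered, doubling
// backoff between failures, as the "will retry after ..." lines show.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := base<<i + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	err := retry(5, 300*time.Millisecond, func() error {
		return errors.New("connection refused") // stand-in for kubectl apply
	})
	fmt.Println("gave up:", err)
}
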
	I1212 19:49:11.497399   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:11.554914   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:11.558623   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.558688   48438 retry.go:31] will retry after 444.117502ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.563799   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:11.619827   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:11.623191   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.623218   48438 retry.go:31] will retry after 549.294372ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.698171   48438 type.go:168] "Request Body" body=""
	I1212 19:49:11.698248   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:11.698564   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:12.003014   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:12.062616   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:12.066362   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.066391   48438 retry.go:31] will retry after 595.188251ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.173715   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:12.197048   48438 type.go:168] "Request Body" body=""
	I1212 19:49:12.197131   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:12.197395   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:12.233993   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:12.234039   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.234058   48438 retry.go:31] will retry after 392.030002ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.626804   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:12.662348   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:12.696816   48438 type.go:168] "Request Body" body=""
	I1212 19:49:12.696944   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:12.697262   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:12.708549   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:12.715333   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.715413   48438 retry.go:31] will retry after 1.207907286s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.756481   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:12.756580   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.756630   48438 retry.go:31] will retry after 988.700176ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:13.197091   48438 type.go:168] "Request Body" body=""
	I1212 19:49:13.197179   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:13.197507   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:13.197567   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
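
node_ready.go above polls GET /api/v1/nodes/functional-384006 every 500ms for up to 6 minutes, treating connection-refused as retryable while the apiserver restarts, until the node reports Ready. An equivalent poll with client-go's wait helpers (a sketch; the kubeconfig resolution is simplified):

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node until its Ready condition is True,
// swallowing transient errors the way the warnings in the log show.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // e.g. connection refused: keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(cs, "functional-384006", 6*time.Minute); err != nil {
		panic(err)
	}
}
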
	I1212 19:49:13.697358   48438 type.go:168] "Request Body" body=""
	I1212 19:49:13.697464   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:13.697803   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:13.746091   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:13.800035   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:13.803463   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:13.803491   48438 retry.go:31] will retry after 829.308427ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:13.923746   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:13.982211   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:13.982249   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:13.982267   48438 retry.go:31] will retry after 769.179652ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:14.196516   48438 type.go:168] "Request Body" body=""
	I1212 19:49:14.196587   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:14.196865   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:14.633627   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:14.690489   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:14.693763   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:14.693798   48438 retry.go:31] will retry after 2.844765229s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:14.697018   48438 type.go:168] "Request Body" body=""
	I1212 19:49:14.697087   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:14.697405   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
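The Accept header on these node GETs (application/vnd.kubernetes.protobuf,application/json) comes from client-go content negotiation: the client asks for protobuf first and falls back to JSON. A sketch of how such a client is configured, assuming a standard kubeconfig path; this mirrors the shape of minikube's client setup rather than its actual code:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Path assumed for the sketch; minikube uses its per-profile kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        // Prefer protobuf on the wire, fall back to JSON -- this produces the
        // Accept header seen in the round_trippers lines above.
        cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
        cfg.ContentType = "application/vnd.kubernetes.protobuf"
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("client ready: %T\n", clientset)
    }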
	I1212 19:49:14.752598   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:14.810008   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:14.810058   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:14.810075   48438 retry.go:31] will retry after 1.702576008s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:15.196507   48438 type.go:168] "Request Body" body=""
	I1212 19:49:15.196581   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:15.196896   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:15.696568   48438 type.go:168] "Request Body" body=""
	I1212 19:49:15.696635   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:15.696970   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:15.697028   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
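Each of the GETs above is one ~500ms tick of the node readiness wait; the request fails at the TCP layer ("Response" status is empty), so the loop never gets to inspect a Ready condition. A sketch of the equivalent poll with client-go, assuming a clientset like the one in the previous sketch; it approximates node_ready.go's get-check-warn-retry cycle, not the actual implementation:

    package nodewait

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // WaitNodeReady polls until the named node reports Ready or the timeout
    // expires. Transport errors (like the connection refused above) are
    // logged and swallowed so the poll keeps going.
    func WaitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 5*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    fmt.Println("error getting node (will retry):", err)
                    return false, nil
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }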
	I1212 19:49:16.196951   48438 type.go:168] "Request Body" body=""
	I1212 19:49:16.197024   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:16.197313   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:16.513895   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:16.577782   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:16.577823   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:16.577842   48438 retry.go:31] will retry after 3.833463827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:16.697243   48438 type.go:168] "Request Body" body=""
	I1212 19:49:16.697311   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:16.697616   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:17.197033   48438 type.go:168] "Request Body" body=""
	I1212 19:49:17.197116   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:17.197383   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:17.538823   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:17.596746   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:17.600222   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:17.600249   48438 retry.go:31] will retry after 2.11378985s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:17.696505   48438 type.go:168] "Request Body" body=""
	I1212 19:49:17.696573   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:17.696885   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:18.196556   48438 type.go:168] "Request Body" body=""
	I1212 19:49:18.196667   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:18.196977   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:18.197023   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:18.696638   48438 type.go:168] "Request Body" body=""
	I1212 19:49:18.696729   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:18.696984   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:19.196736   48438 type.go:168] "Request Body" body=""
	I1212 19:49:19.196812   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:19.197214   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:19.696622   48438 type.go:168] "Request Body" body=""
	I1212 19:49:19.696700   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:19.696961   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:19.714208   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:19.768038   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:19.771528   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:19.771557   48438 retry.go:31] will retry after 5.800996246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
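Note that every apply failure above is client-side: before submitting the manifest, kubectl downloads the cluster's OpenAPI schema for validation, and it is that GET on /openapi/v2 that hits the refused connection. The --validate=false escape hatch the error message suggests would only defer the failure to the apply request itself, since the apiserver is not listening at all. A minimal probe of the same endpoint; InsecureSkipVerify is a sketch-only shortcut, as kubectl verifies against the kubeconfig CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 32 * time.Second, // matches the ?timeout=32s in the log
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://localhost:8441/openapi/v2?timeout=32s")
        if err != nil {
            // This is the failure mode in the log: client-side validation
            // cannot even fetch the schema.
            fmt.Println("openapi fetch failed (validation will fail):", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("openapi status:", resp.Status)
    }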
	I1212 19:49:20.197387   48438 type.go:168] "Request Body" body=""
	I1212 19:49:20.197458   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:20.197743   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:20.197788   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:20.412247   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:20.466933   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:20.470625   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:20.470653   48438 retry.go:31] will retry after 5.197371043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:20.697029   48438 type.go:168] "Request Body" body=""
	I1212 19:49:20.697099   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:20.697410   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:21.197198   48438 type.go:168] "Request Body" body=""
	I1212 19:49:21.197271   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:21.197569   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:21.697046   48438 type.go:168] "Request Body" body=""
	I1212 19:49:21.697116   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:21.697371   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:22.197188   48438 type.go:168] "Request Body" body=""
	I1212 19:49:22.197269   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:22.197585   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:22.697243   48438 type.go:168] "Request Body" body=""
	I1212 19:49:22.697314   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:22.697647   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:22.697696   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:23.197042   48438 type.go:168] "Request Body" body=""
	I1212 19:49:23.197134   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:23.197408   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:23.697049   48438 type.go:168] "Request Body" body=""
	I1212 19:49:23.697121   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:23.697429   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:24.197196   48438 type.go:168] "Request Body" body=""
	I1212 19:49:24.197268   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:24.197600   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:24.697001   48438 type.go:168] "Request Body" body=""
	I1212 19:49:24.697067   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:24.697318   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:25.196599   48438 type.go:168] "Request Body" body=""
	I1212 19:49:25.196674   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:25.197011   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:25.197067   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:25.573546   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:25.640105   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:25.640150   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:25.640168   48438 retry.go:31] will retry after 9.327300318s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:25.668309   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:25.696826   48438 type.go:168] "Request Body" body=""
	I1212 19:49:25.696923   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:25.697181   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:25.735314   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:25.738857   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:25.738887   48438 retry.go:31] will retry after 6.705148998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:26.197164   48438 type.go:168] "Request Body" body=""
	I1212 19:49:26.197240   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:26.197490   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:26.697309   48438 type.go:168] "Request Body" body=""
	I1212 19:49:26.697408   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:26.697729   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:27.197507   48438 type.go:168] "Request Body" body=""
	I1212 19:49:27.197584   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:27.197871   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:27.197919   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:27.696575   48438 type.go:168] "Request Body" body=""
	I1212 19:49:27.696652   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:27.696952   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:28.196680   48438 type.go:168] "Request Body" body=""
	I1212 19:49:28.196762   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:28.197103   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:28.696600   48438 type.go:168] "Request Body" body=""
	I1212 19:49:28.696675   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:28.696996   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:29.196525   48438 type.go:168] "Request Body" body=""
	I1212 19:49:29.196638   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:29.196926   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:29.696599   48438 type.go:168] "Request Body" body=""
	I1212 19:49:29.696677   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:29.697003   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:29.697067   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:30.197085   48438 type.go:168] "Request Body" body=""
	I1212 19:49:30.197181   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:30.197519   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:30.697033   48438 type.go:168] "Request Body" body=""
	I1212 19:49:30.697106   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:30.697351   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:31.197223   48438 type.go:168] "Request Body" body=""
	I1212 19:49:31.197295   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:31.197605   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:31.697429   48438 type.go:168] "Request Body" body=""
	I1212 19:49:31.697504   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:31.697832   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:31.697883   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:32.196518   48438 type.go:168] "Request Body" body=""
	I1212 19:49:32.196586   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:32.196831   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:32.444273   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:32.498733   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:32.502453   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:32.502484   48438 retry.go:31] will retry after 9.024395099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:32.696884   48438 type.go:168] "Request Body" body=""
	I1212 19:49:32.696967   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:32.697298   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:33.196612   48438 type.go:168] "Request Body" body=""
	I1212 19:49:33.196705   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:33.196986   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:33.696528   48438 type.go:168] "Request Body" body=""
	I1212 19:49:33.696606   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:33.696862   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:34.196549   48438 type.go:168] "Request Body" body=""
	I1212 19:49:34.196618   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:34.196944   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:34.196991   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:34.696558   48438 type.go:168] "Request Body" body=""
	I1212 19:49:34.696625   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:34.696943   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:34.968441   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:35.030670   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:35.034703   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:35.034735   48438 retry.go:31] will retry after 11.456350697s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:35.196975   48438 type.go:168] "Request Body" body=""
	I1212 19:49:35.197050   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:35.197325   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:35.697091   48438 type.go:168] "Request Body" body=""
	I1212 19:49:35.697164   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:35.697483   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:36.197206   48438 type.go:168] "Request Body" body=""
	I1212 19:49:36.197280   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:36.197576   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:36.197625   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:36.697028   48438 type.go:168] "Request Body" body=""
	I1212 19:49:36.697108   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:36.697363   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:37.197157   48438 type.go:168] "Request Body" body=""
	I1212 19:49:37.197231   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:37.197556   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:37.697344   48438 type.go:168] "Request Body" body=""
	I1212 19:49:37.697421   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:37.697737   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:38.197048   48438 type.go:168] "Request Body" body=""
	I1212 19:49:38.197120   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:38.197393   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:38.697237   48438 type.go:168] "Request Body" body=""
	I1212 19:49:38.697313   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:38.697687   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:38.697751   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:39.197495   48438 type.go:168] "Request Body" body=""
	I1212 19:49:39.197574   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:39.197923   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:39.696600   48438 type.go:168] "Request Body" body=""
	I1212 19:49:39.696663   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:39.696902   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:40.196826   48438 type.go:168] "Request Body" body=""
	I1212 19:49:40.196908   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:40.197247   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:40.696978   48438 type.go:168] "Request Body" body=""
	I1212 19:49:40.697049   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:40.697369   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:41.197258   48438 type.go:168] "Request Body" body=""
	I1212 19:49:41.197327   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:41.197601   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:41.197683   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:41.527120   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:41.586633   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:41.590403   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:41.590436   48438 retry.go:31] will retry after 11.748431511s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:41.696875   48438 type.go:168] "Request Body" body=""
	I1212 19:49:41.696951   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:41.697272   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:42.196642   48438 type.go:168] "Request Body" body=""
	I1212 19:49:42.196731   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:42.197083   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:42.696550   48438 type.go:168] "Request Body" body=""
	I1212 19:49:42.696647   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:42.696923   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:43.196548   48438 type.go:168] "Request Body" body=""
	I1212 19:49:43.196618   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:43.196955   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:43.696648   48438 type.go:168] "Request Body" body=""
	I1212 19:49:43.696721   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:43.697043   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:43.697102   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:44.196771   48438 type.go:168] "Request Body" body=""
	I1212 19:49:44.196840   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:44.197104   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:44.696558   48438 type.go:168] "Request Body" body=""
	I1212 19:49:44.696662   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:44.696979   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:45.196928   48438 type.go:168] "Request Body" body=""
	I1212 19:49:45.197005   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:45.197335   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:45.696564   48438 type.go:168] "Request Body" body=""
	I1212 19:49:45.696632   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:45.696941   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:46.196940   48438 type.go:168] "Request Body" body=""
	I1212 19:49:46.197010   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:46.197309   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:46.197362   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:46.491755   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:46.549211   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:46.549254   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:46.549272   48438 retry.go:31] will retry after 7.577859466s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:46.697552   48438 type.go:168] "Request Body" body=""
	I1212 19:49:46.697629   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:46.697924   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:47.196531   48438 type.go:168] "Request Body" body=""
	I1212 19:49:47.196597   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:47.196927   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:47.696631   48438 type.go:168] "Request Body" body=""
	I1212 19:49:47.696710   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:47.696981   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:48.196610   48438 type.go:168] "Request Body" body=""
	I1212 19:49:48.196684   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:48.197015   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:48.696655   48438 type.go:168] "Request Body" body=""
	I1212 19:49:48.696726   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:48.697050   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:48.697099   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
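By this point the pattern is conclusive: connections are refused both from inside the node (localhost:8441, in the kubectl runs) and from the host (192.168.49.2:8441, in the pollers), which points at the kube-apiserver not listening at all rather than a routing or firewall problem. A quick dial probe one might use to confirm that while triaging; this is a hypothetical aid, not part of the test:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        for _, addr := range []string{"localhost:8441", "192.168.49.2:8441"} {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err != nil {
                // "connection refused" on both => the apiserver process is down.
                fmt.Printf("%s: %v\n", addr, err)
                continue
            }
            conn.Close()
            fmt.Printf("%s: port open\n", addr)
        }
    }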
	[condensed: nine more refused polls of the same node URL, one every ~500ms from 19:49:49.196 to 19:49:53.196; node_ready.go:55 repeated the will-retry warning ("dial tcp 192.168.49.2:8441: connect: connection refused") at 19:49:50.697 and 19:49:53.196]
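The polls condensed above amount to a fixed-interval readiness loop: the node object is fetched twice a second and "connection refused" is treated as "apiserver not up yet". The following is a minimal stdlib sketch of such a loop, not minikube's actual implementation; the URL comes from the log, while the 6-minute budget, the 2s request timeout, and the skipped TLS verification are assumptions for illustration.

    // Sketch of the 500ms node poll summarized above (assumed constants,
    // not minikube's real code).
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        const nodeURL = "https://192.168.49.2:8441/api/v1/nodes/functional-384006" // from the log
        client := &http.Client{
            Timeout: 2 * time.Second, // assumption
            // The cluster CA is not in the system pool; a real client would
            // load it from the kubeconfig instead of skipping verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(6 * time.Minute) // assumed wait budget
        for time.Now().Before(deadline) {
            resp, err := client.Get(nodeURL)
            if err == nil {
                resp.Body.Close()
                fmt.Println("apiserver answered:", resp.Status)
                return
            }
            fmt.Println("will retry:", err) // e.g. "connect: connection refused"
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the apiserver")
    }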
	I1212 19:49:53.339331   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:53.394698   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:53.398291   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:53.398322   48438 retry.go:31] will retry after 25.381584091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
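The "will retry after 25.381584091s" line (and the 18.0s and 33.3s delays later in this log) shows retry.go handing back a randomized, growing delay rather than a fixed one, so repeated applies do not hammer a down apiserver in lockstep. Below is a minimal sketch of that pattern with made-up constants and a hypothetical applyWithRetry helper; it assumes nothing about minikube's real retry package.

    // Sketch of retry with jittered, growing delays, as suggested by the
    // "will retry after ..." log lines. Constants and names are assumptions.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // applyWithRetry re-runs apply until it succeeds or attempts run out,
    // sleeping a jittered, growing delay between failures.
    func applyWithRetry(apply func() error, attempts int) error {
        base := 10 * time.Second
        var err error
        for i := 0; i < attempts; i++ {
            if err = apply(); err == nil {
                return nil
            }
            // Jitter: wait between 0.5x and 1.5x of the base delay.
            wait := base/2 + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("apply failed, will retry after %s: %v\n", wait, err)
            time.Sleep(wait)
            base = base * 3 / 2 // grow the base for the next round
        }
        return err
    }

    func main() {
        err := applyWithRetry(func() error {
            return errors.New("dial tcp [::1]:8441: connect: connection refused")
        }, 3)
        fmt.Println("final:", err)
    }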
	[condensed: one more refused poll of the node URL at 19:49:53.696]
	I1212 19:49:54.127648   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:54.185700   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:54.185751   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:54.185771   48438 retry.go:31] will retry after 18.076319981s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[condensed: the 500ms node poll continued unanswered from 19:49:54.196 to 19:50:12.197 (37 attempts, all "connect: connection refused"); node_ready.go:55 repeated the will-retry warning roughly every 2s, at 19:49:55.197, 19:49:57.197, 19:49:59.697, 19:50:01.697, 19:50:04.197, 19:50:06.197, 19:50:08.697 and 19:50:10.697]
	I1212 19:50:12.263038   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:50:12.317640   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:12.321089   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:50:12.321118   48438 retry.go:31] will retry after 33.331276854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[condensed: 13 more refused polls of the node URL from 19:50:12.696 to 19:50:18.696; the node_ready.go:55 will-retry warning recurred at 19:50:13.197, 19:50:15.197 and 19:50:17.697]
	I1212 19:50:18.780412   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:50:18.840261   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:18.840307   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:50:18.840327   48438 retry.go:31] will retry after 31.549397312s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
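Both addon applies fail for the same root cause visible throughout this log: kubectl's client-side validation needs to download /openapi/v2 from the apiserver, and nothing is listening on port 8441 yet, so every attempt ends in "connect: connection refused". A caller that wants to treat that specific condition as retryable can unwrap it with errors.Is; a small sketch, assuming only standard-library semantics:

    // Sketch: classify "connection refused" as retryable. errors.Is walks
    // net/http's wrapped error chain down to the underlying errno.
    package main

    import (
        "errors"
        "fmt"
        "net/http"
        "syscall"
    )

    func retryable(err error) bool {
        return errors.Is(err, syscall.ECONNREFUSED)
    }

    func main() {
        // With no apiserver on 8441 this fails before TLS, like the log above.
        _, err := http.Get("https://localhost:8441/openapi/v2")
        if err != nil && retryable(err) {
            fmt.Println("apiserver not up yet, retry later:", err)
            return
        }
        fmt.Println("outcome:", err)
    }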
	[condensed: the node poll kept failing every ~500ms from 19:50:19.196 to 19:50:42.697 (48 attempts, all "connect: connection refused"); node_ready.go:55 logged the will-retry warning 11 more times, roughly every 2s, from 19:50:20.197 through 19:50:42.697]
	I1212 19:50:43.197172   48438 type.go:168] "Request Body" body=""
	I1212 19:50:43.197249   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:43.197578   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:43.697431   48438 type.go:168] "Request Body" body=""
	I1212 19:50:43.697521   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:43.697836   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:44.196518   48438 type.go:168] "Request Body" body=""
	I1212 19:50:44.196586   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:44.196857   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:44.696570   48438 type.go:168] "Request Body" body=""
	I1212 19:50:44.696646   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:44.697013   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:45.196875   48438 type.go:168] "Request Body" body=""
	I1212 19:50:45.196959   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:45.197384   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:45.197450   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
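	[editor's note] The condensed block above is minikube's node Ready poll: one GET every ~500 ms, with a warning after every few refused attempts, until the node reports Ready or the timeout expires. A self-contained sketch of the same pattern using client-go and apimachinery's wait helpers; the function and variable names are illustrative, not minikube's actual node_ready.go code:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node's Ready condition every 500 ms until it is
	// true or the timeout expires, retrying through transient dial errors
	// (the "connection refused" attempts seen in the log).
	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					fmt.Printf("error getting node %q (will retry): %v\n", name, err)
					return false, nil // swallow the error and keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		// Illustrative kubeconfig path; substitute your own.
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(context.Background(), cs, "functional-384006", 6*time.Minute); err != nil {
			fmt.Println("node never became Ready:", err)
		}
	}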
	I1212 19:50:45.653170   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:50:45.696473   48438 type.go:168] "Request Body" body=""
	I1212 19:50:45.696544   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:45.696768   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:45.722043   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:45.722078   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:45.722170   48438 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
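	[editor's note] This addon failure (and the storage-provisioner one below) shares one cause: kubectl's client-side validation tries to download the OpenAPI schema from the still-unreachable apiserver before applying, so the apply exits 1 and minikube treats it as retryable ("apply failed, will retry"). A rough sketch of that retry wrapper using os/exec; the command and manifest path are copied from the log, while the retry policy here is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Command and manifest path as they appear in the log.
		args := []string{
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml",
		}
		// Illustrative retry policy: five attempts, 5 s apart.
		for attempt := 1; attempt <= 5; attempt++ {
			out, err := exec.Command("sudo", args...).CombinedOutput()
			if err == nil {
				fmt.Println("applied:", string(out))
				return
			}
			fmt.Printf("apply failed, will retry (attempt %d): %v\n%s\n", attempt, err, out)
			time.Sleep(5 * time.Second)
		}
	}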
	[readiness poll condensed: the same GET https://192.168.49.2:8441/api/v1/nodes/functional-384006 request repeats every ~500 ms from 19:50:46.197 to 19:50:50.197, each attempt refused; node_ready.go:55 logs "will retry" warnings at 19:50:47.697 and 19:50:49.697]
	I1212 19:50:50.390183   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:50:50.447451   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:50.447486   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:50.447560   48438 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
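	[editor's note] Every refused attempt in this log reduces to the same TCP-level fact: nothing is listening on 192.168.49.2:8441 (nor on localhost:8441, which kubectl's validation hit). A short probe that reproduces the exact dial error string from the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			// With the apiserver down this prints:
			// dial tcp 192.168.49.2:8441: connect: connection refused
			fmt.Println(err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}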
	I1212 19:50:50.450729   48438 out.go:179] * Enabled addons: 
	I1212 19:50:50.452858   48438 addons.go:530] duration metric: took 1m40.25694205s for enable addons: enabled=[]
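	[editor's note] The 1m40s figure is wall-clock time for the whole addon phase, and enabled=[] confirms no callback succeeded. A minimal sketch of the duration-metric pattern behind that log line; the klog call and variable names are illustrative:

	package main

	import (
		"time"

		"k8s.io/klog/v2"
	)

	func main() {
		start := time.Now()
		enabled := []string{} // no addon callbacks succeeded in this run
		// ... enable-addons work would happen here ...
		klog.Infof("duration metric: took %s for enable addons: enabled=%v", time.Since(start), enabled)
	}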
	[readiness poll condensed: the same GET https://192.168.49.2:8441/api/v1/nodes/functional-384006 request repeats every ~500 ms from 19:50:50.697 through 19:51:34.697 (~88 attempts), every one failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 keeps logging the "will retry" warning every ~2-2.5 s, the last at 19:51:34.197]
	I1212 19:51:35.196546   48438 type.go:168] "Request Body" body=""
	I1212 19:51:35.196618   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:35.196925   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:35.696574   48438 type.go:168] "Request Body" body=""
	I1212 19:51:35.696652   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:35.696981   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:36.196823   48438 type.go:168] "Request Body" body=""
	I1212 19:51:36.196902   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:36.197231   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:36.197287   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:36.696532   48438 type.go:168] "Request Body" body=""
	I1212 19:51:36.696609   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:36.696939   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:37.196579   48438 type.go:168] "Request Body" body=""
	I1212 19:51:37.196647   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:37.196985   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:37.696709   48438 type.go:168] "Request Body" body=""
	I1212 19:51:37.696787   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:37.697120   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:38.196638   48438 type.go:168] "Request Body" body=""
	I1212 19:51:38.196709   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:38.196961   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:38.696633   48438 type.go:168] "Request Body" body=""
	I1212 19:51:38.696706   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:38.697082   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:38.697136   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:39.196826   48438 type.go:168] "Request Body" body=""
	I1212 19:51:39.196897   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:39.197247   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:39.696926   48438 type.go:168] "Request Body" body=""
	I1212 19:51:39.696993   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:39.697255   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:40.197304   48438 type.go:168] "Request Body" body=""
	I1212 19:51:40.197383   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:40.197713   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:40.697536   48438 type.go:168] "Request Body" body=""
	I1212 19:51:40.697608   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:40.697930   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:40.697980   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:41.196802   48438 type.go:168] "Request Body" body=""
	I1212 19:51:41.196879   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:41.197213   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:41.696612   48438 type.go:168] "Request Body" body=""
	I1212 19:51:41.696684   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:41.696972   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:42.196646   48438 type.go:168] "Request Body" body=""
	I1212 19:51:42.196740   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:42.197248   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:42.696573   48438 type.go:168] "Request Body" body=""
	I1212 19:51:42.696660   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:42.696989   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:43.196597   48438 type.go:168] "Request Body" body=""
	I1212 19:51:43.196673   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:43.197021   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:43.197077   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:43.696739   48438 type.go:168] "Request Body" body=""
	I1212 19:51:43.696817   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:43.697134   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:44.196553   48438 type.go:168] "Request Body" body=""
	I1212 19:51:44.196631   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:44.196885   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:44.696599   48438 type.go:168] "Request Body" body=""
	I1212 19:51:44.696676   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:44.697022   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:45.199987   48438 type.go:168] "Request Body" body=""
	I1212 19:51:45.200075   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:45.200389   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:45.200457   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:45.696961   48438 type.go:168] "Request Body" body=""
	I1212 19:51:45.697027   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:45.697297   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:46.197211   48438 type.go:168] "Request Body" body=""
	I1212 19:51:46.197284   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:46.197636   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:46.697445   48438 type.go:168] "Request Body" body=""
	I1212 19:51:46.697530   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:46.697884   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:47.196573   48438 type.go:168] "Request Body" body=""
	I1212 19:51:47.196640   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:47.196909   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:47.696587   48438 type.go:168] "Request Body" body=""
	I1212 19:51:47.696662   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:47.697003   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:47.697055   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:48.196488   48438 type.go:168] "Request Body" body=""
	I1212 19:51:48.196562   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:48.196880   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:48.696551   48438 type.go:168] "Request Body" body=""
	I1212 19:51:48.696621   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:48.696957   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:49.196623   48438 type.go:168] "Request Body" body=""
	I1212 19:51:49.196699   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:49.197013   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:49.696738   48438 type.go:168] "Request Body" body=""
	I1212 19:51:49.696820   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:49.697179   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:49.697232   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:50.197074   48438 type.go:168] "Request Body" body=""
	I1212 19:51:50.197154   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:50.197448   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:50.697257   48438 type.go:168] "Request Body" body=""
	I1212 19:51:50.697328   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:50.697663   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:51.197208   48438 type.go:168] "Request Body" body=""
	I1212 19:51:51.197282   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:51.197618   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:51.697235   48438 type.go:168] "Request Body" body=""
	I1212 19:51:51.697312   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:51.697612   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:51.697676   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:52.197406   48438 type.go:168] "Request Body" body=""
	I1212 19:51:52.197485   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:52.197812   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:52.696535   48438 type.go:168] "Request Body" body=""
	I1212 19:51:52.696633   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:52.696945   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:53.196550   48438 type.go:168] "Request Body" body=""
	I1212 19:51:53.196626   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:53.196901   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:53.696615   48438 type.go:168] "Request Body" body=""
	I1212 19:51:53.696688   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:53.697001   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:54.196603   48438 type.go:168] "Request Body" body=""
	I1212 19:51:54.196699   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:54.197048   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:54.197103   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:54.696754   48438 type.go:168] "Request Body" body=""
	I1212 19:51:54.696831   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:54.697099   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:55.196853   48438 type.go:168] "Request Body" body=""
	I1212 19:51:55.196927   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:55.197248   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:55.696613   48438 type.go:168] "Request Body" body=""
	I1212 19:51:55.696683   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:55.697052   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:56.196859   48438 type.go:168] "Request Body" body=""
	I1212 19:51:56.196930   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:56.197194   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:56.197240   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:56.696597   48438 type.go:168] "Request Body" body=""
	I1212 19:51:56.696681   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:56.697030   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:57.196593   48438 type.go:168] "Request Body" body=""
	I1212 19:51:57.196665   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:57.196998   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:57.696676   48438 type.go:168] "Request Body" body=""
	I1212 19:51:57.696744   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:57.697019   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:58.196568   48438 type.go:168] "Request Body" body=""
	I1212 19:51:58.196638   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:58.196955   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:58.696579   48438 type.go:168] "Request Body" body=""
	I1212 19:51:58.696651   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:58.696996   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:58.697049   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:59.196681   48438 type.go:168] "Request Body" body=""
	I1212 19:51:59.196753   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:59.197032   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:59.696579   48438 type.go:168] "Request Body" body=""
	I1212 19:51:59.696659   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:59.696968   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:00.197205   48438 type.go:168] "Request Body" body=""
	I1212 19:52:00.197290   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:00.197625   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:00.697067   48438 type.go:168] "Request Body" body=""
	I1212 19:52:00.697141   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:00.697476   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:00.697529   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:01.197420   48438 type.go:168] "Request Body" body=""
	I1212 19:52:01.197496   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:01.197846   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:01.696560   48438 type.go:168] "Request Body" body=""
	I1212 19:52:01.696637   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:01.696968   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:02.196588   48438 type.go:168] "Request Body" body=""
	I1212 19:52:02.196660   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:02.196972   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:02.696570   48438 type.go:168] "Request Body" body=""
	I1212 19:52:02.696648   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:02.696964   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:03.196609   48438 type.go:168] "Request Body" body=""
	I1212 19:52:03.196688   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:03.197049   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:03.197103   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:03.696754   48438 type.go:168] "Request Body" body=""
	I1212 19:52:03.696832   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:03.697081   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:04.196628   48438 type.go:168] "Request Body" body=""
	I1212 19:52:04.196706   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:04.197052   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:04.696745   48438 type.go:168] "Request Body" body=""
	I1212 19:52:04.696824   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:04.697154   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:05.196838   48438 type.go:168] "Request Body" body=""
	I1212 19:52:05.196927   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:05.197234   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:05.197290   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:05.696932   48438 type.go:168] "Request Body" body=""
	I1212 19:52:05.697009   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:05.697331   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:06.197237   48438 type.go:168] "Request Body" body=""
	I1212 19:52:06.197311   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:06.197634   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:06.697046   48438 type.go:168] "Request Body" body=""
	I1212 19:52:06.697120   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:06.697379   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:07.197151   48438 type.go:168] "Request Body" body=""
	I1212 19:52:07.197221   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:07.197514   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:07.197560   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:07.697323   48438 type.go:168] "Request Body" body=""
	I1212 19:52:07.697404   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:07.697708   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:08.197031   48438 type.go:168] "Request Body" body=""
	I1212 19:52:08.197097   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:08.197357   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:08.697148   48438 type.go:168] "Request Body" body=""
	I1212 19:52:08.697227   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:08.697556   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:09.197392   48438 type.go:168] "Request Body" body=""
	I1212 19:52:09.197468   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:09.197784   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:09.197845   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:09.696549   48438 type.go:168] "Request Body" body=""
	I1212 19:52:09.696616   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:09.696887   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:10.196962   48438 type.go:168] "Request Body" body=""
	I1212 19:52:10.197039   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:10.197334   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:10.696626   48438 type.go:168] "Request Body" body=""
	I1212 19:52:10.696717   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:10.697024   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:11.196847   48438 type.go:168] "Request Body" body=""
	I1212 19:52:11.196921   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:11.197227   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:11.696601   48438 type.go:168] "Request Body" body=""
	I1212 19:52:11.696679   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:11.696981   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:11.697032   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:12.196580   48438 type.go:168] "Request Body" body=""
	I1212 19:52:12.196650   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:12.196940   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:12.696545   48438 type.go:168] "Request Body" body=""
	I1212 19:52:12.696621   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:12.696869   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:13.196568   48438 type.go:168] "Request Body" body=""
	I1212 19:52:13.196664   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:13.196980   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:13.696589   48438 type.go:168] "Request Body" body=""
	I1212 19:52:13.696666   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:13.697006   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:13.697058   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:14.196560   48438 type.go:168] "Request Body" body=""
	I1212 19:52:14.196631   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:14.196946   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:14.696636   48438 type.go:168] "Request Body" body=""
	I1212 19:52:14.696714   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:14.697058   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:15.196659   48438 type.go:168] "Request Body" body=""
	I1212 19:52:15.196740   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:15.197071   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:15.696563   48438 type.go:168] "Request Body" body=""
	I1212 19:52:15.696653   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:15.696954   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:16.196956   48438 type.go:168] "Request Body" body=""
	I1212 19:52:16.197033   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:16.197379   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:16.197433   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:16.696942   48438 type.go:168] "Request Body" body=""
	I1212 19:52:16.697013   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:16.697325   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:17.197029   48438 type.go:168] "Request Body" body=""
	I1212 19:52:17.197104   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:17.197358   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:17.697015   48438 type.go:168] "Request Body" body=""
	I1212 19:52:17.697084   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:17.697367   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:18.196629   48438 type.go:168] "Request Body" body=""
	I1212 19:52:18.196717   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:18.197023   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:18.696554   48438 type.go:168] "Request Body" body=""
	I1212 19:52:18.696628   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:18.696875   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:18.696923   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:19.196580   48438 type.go:168] "Request Body" body=""
	I1212 19:52:19.196654   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:19.196987   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:19.696532   48438 type.go:168] "Request Body" body=""
	I1212 19:52:19.696605   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:19.696921   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:20.196969   48438 type.go:168] "Request Body" body=""
	I1212 19:52:20.197044   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:20.197330   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:20.696598   48438 type.go:168] "Request Body" body=""
	I1212 19:52:20.696690   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:20.696997   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:20.697054   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:21.197019   48438 type.go:168] "Request Body" body=""
	I1212 19:52:21.197109   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:21.197420   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:21.697065   48438 type.go:168] "Request Body" body=""
	I1212 19:52:21.697171   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:21.697471   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:22.197327   48438 type.go:168] "Request Body" body=""
	I1212 19:52:22.197400   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:22.197732   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:22.697523   48438 type.go:168] "Request Body" body=""
	I1212 19:52:22.697602   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:22.697908   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:22.697961   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:23.196582   48438 type.go:168] "Request Body" body=""
	I1212 19:52:23.196653   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:23.196911   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:23.696648   48438 type.go:168] "Request Body" body=""
	I1212 19:52:23.696728   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:23.697054   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:24.196615   48438 type.go:168] "Request Body" body=""
	I1212 19:52:24.196693   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:24.197072   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:24.696554   48438 type.go:168] "Request Body" body=""
	I1212 19:52:24.696620   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:24.696867   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:25.196559   48438 type.go:168] "Request Body" body=""
	I1212 19:52:25.196634   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:25.196989   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:25.197049   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-384006 poll repeats every ~500 ms for the next minute (19:52:25.696745 through 19:53:25.696998), every attempt failing with "connect: connection refused" and logging status="" milliseconds=0; node_ready.go:55 keeps emitting the same "will retry" warning every 2-2.5 s, the last at 19:53:23.697150 ...]
	I1212 19:53:26.196896   48438 type.go:168] "Request Body" body=""
	I1212 19:53:26.196961   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:26.197214   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:26.197253   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:26.696574   48438 type.go:168] "Request Body" body=""
	I1212 19:53:26.696649   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:26.696959   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:27.196612   48438 type.go:168] "Request Body" body=""
	I1212 19:53:27.196684   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:27.197007   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:27.696544   48438 type.go:168] "Request Body" body=""
	I1212 19:53:27.696619   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:27.696894   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:28.196507   48438 type.go:168] "Request Body" body=""
	I1212 19:53:28.196604   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:28.196939   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:28.696532   48438 type.go:168] "Request Body" body=""
	I1212 19:53:28.696610   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:28.696931   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:28.696979   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:29.196634   48438 type.go:168] "Request Body" body=""
	I1212 19:53:29.196703   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:29.197001   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:29.696592   48438 type.go:168] "Request Body" body=""
	I1212 19:53:29.696669   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:29.696967   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:30.196961   48438 type.go:168] "Request Body" body=""
	I1212 19:53:30.197040   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:30.197390   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:30.696556   48438 type.go:168] "Request Body" body=""
	I1212 19:53:30.696640   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:30.696996   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:30.697048   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:31.197039   48438 type.go:168] "Request Body" body=""
	I1212 19:53:31.197113   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:31.197435   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:31.697109   48438 type.go:168] "Request Body" body=""
	I1212 19:53:31.697183   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:31.697494   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:32.198094   48438 type.go:168] "Request Body" body=""
	I1212 19:53:32.198180   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:32.198485   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:32.697348   48438 type.go:168] "Request Body" body=""
	I1212 19:53:32.697418   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:32.697743   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:32.697798   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:33.197520   48438 type.go:168] "Request Body" body=""
	I1212 19:53:33.197607   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:33.197978   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:33.696536   48438 type.go:168] "Request Body" body=""
	I1212 19:53:33.696612   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:33.696904   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:34.196586   48438 type.go:168] "Request Body" body=""
	I1212 19:53:34.196660   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:34.197007   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:34.696683   48438 type.go:168] "Request Body" body=""
	I1212 19:53:34.696755   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:34.697071   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:35.196545   48438 type.go:168] "Request Body" body=""
	I1212 19:53:35.196626   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:35.196916   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:35.196957   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:35.696569   48438 type.go:168] "Request Body" body=""
	I1212 19:53:35.696639   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:35.696968   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:36.197043   48438 type.go:168] "Request Body" body=""
	I1212 19:53:36.197118   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:36.197425   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:36.697036   48438 type.go:168] "Request Body" body=""
	I1212 19:53:36.697109   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:36.697356   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:37.197167   48438 type.go:168] "Request Body" body=""
	I1212 19:53:37.197245   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:37.197543   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:37.197597   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:37.697255   48438 type.go:168] "Request Body" body=""
	I1212 19:53:37.697332   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:37.697651   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:38.197013   48438 type.go:168] "Request Body" body=""
	I1212 19:53:38.197090   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:38.197364   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:38.697110   48438 type.go:168] "Request Body" body=""
	I1212 19:53:38.697196   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:38.697532   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:39.197331   48438 type.go:168] "Request Body" body=""
	I1212 19:53:39.197405   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:39.197724   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:39.197779   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:39.697068   48438 type.go:168] "Request Body" body=""
	I1212 19:53:39.697132   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:39.697395   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:40.197348   48438 type.go:168] "Request Body" body=""
	I1212 19:53:40.197427   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:40.197783   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:40.697442   48438 type.go:168] "Request Body" body=""
	I1212 19:53:40.697518   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:40.697857   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:41.196820   48438 type.go:168] "Request Body" body=""
	I1212 19:53:41.196897   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:41.197188   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:41.696606   48438 type.go:168] "Request Body" body=""
	I1212 19:53:41.696677   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:41.696997   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:41.697059   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:42.199056   48438 type.go:168] "Request Body" body=""
	I1212 19:53:42.199156   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:42.199500   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:42.697028   48438 type.go:168] "Request Body" body=""
	I1212 19:53:42.697106   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:42.697363   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:43.197146   48438 type.go:168] "Request Body" body=""
	I1212 19:53:43.197216   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:43.197509   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:43.697051   48438 type.go:168] "Request Body" body=""
	I1212 19:53:43.697127   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:43.697442   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:43.697496   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:44.196983   48438 type.go:168] "Request Body" body=""
	I1212 19:53:44.197047   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:44.197291   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:44.696605   48438 type.go:168] "Request Body" body=""
	I1212 19:53:44.696682   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:44.696999   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:45.196640   48438 type.go:168] "Request Body" body=""
	I1212 19:53:45.196734   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:45.197134   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:45.696663   48438 type.go:168] "Request Body" body=""
	I1212 19:53:45.696733   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:45.696987   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:46.196891   48438 type.go:168] "Request Body" body=""
	I1212 19:53:46.196961   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:46.197246   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:46.197291   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:46.696569   48438 type.go:168] "Request Body" body=""
	I1212 19:53:46.696642   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:46.696980   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:47.196530   48438 type.go:168] "Request Body" body=""
	I1212 19:53:47.196610   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:47.196909   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:47.696574   48438 type.go:168] "Request Body" body=""
	I1212 19:53:47.696642   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:47.696962   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:48.196546   48438 type.go:168] "Request Body" body=""
	I1212 19:53:48.196628   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:48.196975   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:48.696510   48438 type.go:168] "Request Body" body=""
	I1212 19:53:48.696581   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:48.696845   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:48.696887   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:49.196542   48438 type.go:168] "Request Body" body=""
	I1212 19:53:49.196622   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:49.196995   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:49.696556   48438 type.go:168] "Request Body" body=""
	I1212 19:53:49.696630   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:49.696954   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:50.196908   48438 type.go:168] "Request Body" body=""
	I1212 19:53:50.196982   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:50.197236   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:50.696599   48438 type.go:168] "Request Body" body=""
	I1212 19:53:50.696673   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:50.696998   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:50.697100   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:51.197065   48438 type.go:168] "Request Body" body=""
	I1212 19:53:51.197137   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:51.197471   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:51.697096   48438 type.go:168] "Request Body" body=""
	I1212 19:53:51.697167   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:51.697415   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:52.197178   48438 type.go:168] "Request Body" body=""
	I1212 19:53:52.197249   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:52.197545   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:52.697252   48438 type.go:168] "Request Body" body=""
	I1212 19:53:52.697323   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:52.697637   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:52.697692   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:53.197047   48438 type.go:168] "Request Body" body=""
	I1212 19:53:53.197114   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:53.197377   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:53.697133   48438 type.go:168] "Request Body" body=""
	I1212 19:53:53.697217   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:53.697511   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:54.197195   48438 type.go:168] "Request Body" body=""
	I1212 19:53:54.197316   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:54.197626   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:54.697029   48438 type.go:168] "Request Body" body=""
	I1212 19:53:54.697097   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:54.697384   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:55.197154   48438 type.go:168] "Request Body" body=""
	I1212 19:53:55.197226   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:55.197534   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:55.197594   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:55.697076   48438 type.go:168] "Request Body" body=""
	I1212 19:53:55.697150   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:55.697464   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:56.197358   48438 type.go:168] "Request Body" body=""
	I1212 19:53:56.197424   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:56.197682   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:56.697448   48438 type.go:168] "Request Body" body=""
	I1212 19:53:56.697524   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:56.697853   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:57.196589   48438 type.go:168] "Request Body" body=""
	I1212 19:53:57.196672   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:57.197005   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:57.696668   48438 type.go:168] "Request Body" body=""
	I1212 19:53:57.696743   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:57.697044   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:57.697102   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:58.196594   48438 type.go:168] "Request Body" body=""
	I1212 19:53:58.196721   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:58.197023   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:58.696739   48438 type.go:168] "Request Body" body=""
	I1212 19:53:58.696813   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:58.697128   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:59.196544   48438 type.go:168] "Request Body" body=""
	I1212 19:53:59.196620   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:59.196916   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:59.696616   48438 type.go:168] "Request Body" body=""
	I1212 19:53:59.696690   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:59.696999   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:00.196755   48438 type.go:168] "Request Body" body=""
	I1212 19:54:00.196856   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:00.197201   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:00.197255   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:00.696903   48438 type.go:168] "Request Body" body=""
	I1212 19:54:00.696982   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:00.697296   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:01.197188   48438 type.go:168] "Request Body" body=""
	I1212 19:54:01.197260   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:01.197599   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:01.697267   48438 type.go:168] "Request Body" body=""
	I1212 19:54:01.697339   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:01.697686   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:02.197043   48438 type.go:168] "Request Body" body=""
	I1212 19:54:02.197122   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:02.197381   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:02.197430   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:02.697170   48438 type.go:168] "Request Body" body=""
	I1212 19:54:02.697265   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:02.697621   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:03.197435   48438 type.go:168] "Request Body" body=""
	I1212 19:54:03.197518   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:03.197849   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:03.696519   48438 type.go:168] "Request Body" body=""
	I1212 19:54:03.696591   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:03.696894   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:04.196608   48438 type.go:168] "Request Body" body=""
	I1212 19:54:04.196681   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:04.197029   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:04.696731   48438 type.go:168] "Request Body" body=""
	I1212 19:54:04.696801   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:04.697124   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:04.697174   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:05.196541   48438 type.go:168] "Request Body" body=""
	I1212 19:54:05.196621   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:05.196959   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:05.696572   48438 type.go:168] "Request Body" body=""
	I1212 19:54:05.696651   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:05.696979   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:06.196971   48438 type.go:168] "Request Body" body=""
	I1212 19:54:06.197050   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:06.197372   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:06.696982   48438 type.go:168] "Request Body" body=""
	I1212 19:54:06.697050   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:06.697313   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:06.697353   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:07.197152   48438 type.go:168] "Request Body" body=""
	I1212 19:54:07.197223   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:07.197552   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:07.697346   48438 type.go:168] "Request Body" body=""
	I1212 19:54:07.697416   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:07.697736   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:08.197037   48438 type.go:168] "Request Body" body=""
	I1212 19:54:08.197113   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:08.197390   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:08.697159   48438 type.go:168] "Request Body" body=""
	I1212 19:54:08.697238   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:08.697572   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:08.697622   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:09.197260   48438 type.go:168] "Request Body" body=""
	I1212 19:54:09.197335   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:09.197650   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:09.697011   48438 type.go:168] "Request Body" body=""
	I1212 19:54:09.697085   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:09.697367   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:10.197549   48438 type.go:168] "Request Body" body=""
	I1212 19:54:10.197634   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:10.197971   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:10.696564   48438 type.go:168] "Request Body" body=""
	I1212 19:54:10.696638   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:10.696971   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:11.196845   48438 type.go:168] "Request Body" body=""
	I1212 19:54:11.196925   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:11.197172   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:11.197214   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:11.696846   48438 type.go:168] "Request Body" body=""
	I1212 19:54:11.696918   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:11.697216   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:12.196612   48438 type.go:168] "Request Body" body=""
	I1212 19:54:12.196682   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:12.197027   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:12.696568   48438 type.go:168] "Request Body" body=""
	I1212 19:54:12.696638   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:12.696933   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:13.196634   48438 type.go:168] "Request Body" body=""
	I1212 19:54:13.196725   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:13.197087   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:13.696789   48438 type.go:168] "Request Body" body=""
	I1212 19:54:13.696882   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:13.697231   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:13.697285   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:14.196910   48438 type.go:168] "Request Body" body=""
	I1212 19:54:14.196976   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:14.197328   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:14.697114   48438 type.go:168] "Request Body" body=""
	I1212 19:54:14.697187   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:14.697517   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:15.197329   48438 type.go:168] "Request Body" body=""
	I1212 19:54:15.197401   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:15.197739   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:15.697022   48438 type.go:168] "Request Body" body=""
	I1212 19:54:15.697095   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:15.697438   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:15.697494   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-384006 poll repeats every ~500ms from 19:54:16 through 19:55:10, each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused" and node_ready.go logging the retry warning roughly every 2.5s ...]
	I1212 19:55:10.696766   48438 type.go:168] "Request Body" body=""
	I1212 19:55:10.696855   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:10.697231   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:55:10.697295   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:55:11.196994   48438 node_ready.go:38] duration metric: took 6m0.000614517s for node "functional-384006" to be "Ready" ...
	I1212 19:55:11.200166   48438 out.go:203] 
	W1212 19:55:11.203009   48438 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 19:55:11.203186   48438 out.go:285] * 
	W1212 19:55:11.205457   48438 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 19:55:11.208306   48438 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.743801454Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.743817642Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.743915125Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.743934070Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.743946164Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.743958808Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.743968129Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.743979025Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.743996739Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.744035590Z" level=info msg="Connect containerd service"
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.744313966Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.744845178Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.761811243Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.761874413Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.761907216Z" level=info msg="Start subscribing containerd event"
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.761954886Z" level=info msg="Start recovering state"
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.804116204Z" level=info msg="Start event monitor"
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.804339477Z" level=info msg="Start cni network conf syncer for default"
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.804436730Z" level=info msg="Start streaming server"
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.804520478Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.804757083Z" level=info msg="runtime interface starting up..."
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.804833307Z" level=info msg="starting plugins..."
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.804898126Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 19:49:08 functional-384006 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.807197031Z" level=info msg="containerd successfully booted in 0.084279s"
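
(The "failed to load cni during init" entry above is the usual pre-bootstrap state: /etc/cni/net.d is empty until minikube deploys a CNI, kindnet in this configuration. One way to confirm from the host, assuming the kic container is still running at that point; illustrative only:)

    docker exec functional-384006 ls -la /etc/cni/net.d
    # an empty or missing directory matches the "no network config found" message above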
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:55:13.060042    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:55:13.060859    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:55:13.063013    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:55:13.064240    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:55:13.065145    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
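
(kubectl here runs on the node itself and talks to localhost:8441, so the refusal confirms the apiserver never came up, rather than a host-side port-mapping problem. The same check can be run by hand, assuming curl is present in the node image; illustrative only:)

    out/minikube-linux-arm64 -p functional-384006 ssh -- curl -sk https://localhost:8441/healthz
    # "connection refused" here means no apiserver process, consistent with the empty container list above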
	
	
	==> dmesg <==
	[Dec12 19:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014827] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497798] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.037128] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.743560] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.524348] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 19:55:13 up 37 min,  0 user,  load average: 0.16, 0.24, 0.54
	Linux functional-384006 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 19:55:09 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 19:55:10 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 808.
	Dec 12 19:55:10 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:10 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:10 functional-384006 kubelet[8298]: E1212 19:55:10.492209    8298 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 19:55:10 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 19:55:10 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 19:55:11 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 809.
	Dec 12 19:55:11 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:11 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:11 functional-384006 kubelet[8304]: E1212 19:55:11.268504    8304 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 19:55:11 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 19:55:11 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 19:55:11 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 810.
	Dec 12 19:55:11 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:11 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:12 functional-384006 kubelet[8310]: E1212 19:55:12.021187    8310 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 19:55:12 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 19:55:12 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 19:55:12 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 12 19:55:12 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:12 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:12 functional-384006 kubelet[8331]: E1212 19:55:12.742069    8331 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 19:55:12 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 19:55:12 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
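
(The restart loop above, counter 808 through 811, is the root cause of this failure: the v1.35.0-beta.0 kubelet validates the host cgroup mode and exits when it finds cgroup v1. The host's cgroup mode can be read off the /sys/fs/cgroup mount; a standard check, shown here for reference:)

    stat -fc %T /sys/fs/cgroup
    # "cgroup2fs" means cgroup v2; "tmpfs" means the legacy cgroup v1 hierarchy that this kubelet rejects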
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-384006 -n functional-384006
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-384006 -n functional-384006: exit status 2 (364.823288ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-384006" apiserver is not running, skipping kubectl commands (state="Stopped")
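
(The helper reads only the {{.APIServer}} field; dropping the --format flag prints the full per-component status, which makes the Running-host/Stopped-apiserver split visible at a glance. Illustrative invocation:)

    out/minikube-linux-arm64 status -p functional-384006
    # host, kubelet, apiserver, and kubeconfig are reported per component; apiserver shows Stopped for this run
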
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (367.93s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-384006 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-384006 get po -A: exit status 1 (59.459164ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-384006 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-384006 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-384006 get po -A"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-384006
helpers_test.go:244: (dbg) docker inspect functional-384006:

-- stdout --
	[
	    {
	        "Id": "b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17",
	        "Created": "2025-12-12T19:40:49.413785329Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43086,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T19:40:49.485581335Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/hosts",
	        "LogPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17-json.log",
	        "Name": "/functional-384006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-384006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-384006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17",
	                "LowerDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296-init/diff:/var/lib/docker/overlay2/e045d4bf347c64f3cbf42a97f0cb5729ed5699bda73ca5751717f555f7c01df1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/merged",
	                "UpperDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/diff",
	                "WorkDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-384006",
	                "Source": "/var/lib/docker/volumes/functional-384006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-384006",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-384006",
	                "name.minikube.sigs.k8s.io": "functional-384006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "36cb954f7d4f6bf90d415ba6b309740af43913afba20f6d7d93ec3c7d90d4de5",
	            "SandboxKey": "/var/run/docker/netns/36cb954f7d4f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-384006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:63:42:b7:50:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef3790c143c0333ab10341d6a40177cef53914dddf926d048a811221f7b4d25e",
	                    "EndpointID": "d9f77e46696253f9c3ce8a0a36703d7a03738ae348c39276dbe99fc3079fb5ee",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-384006",
	                        "b1a98cbc4698"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
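
(Rather than scanning the full inspect JSON above, a Go template pulls out the one field of interest; this is the same pattern the driver itself uses later in this log for the SSH port. Shown for reference:)

    docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-384006
    # prints 32791 for this run, i.e. the host side of the apiserver port mapping listed above
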
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-384006 -n functional-384006
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-384006 -n functional-384006: exit status 2 (335.213557ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-008271 image load --daemon kicbase/echo-server:functional-008271 --alsologtostderr                                                                   │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ ssh            │ functional-008271 ssh sudo cat /usr/share/ca-certificates/41202.pem                                                                                             │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ ssh            │ functional-008271 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image ls                                                                                                                                      │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image load --daemon kicbase/echo-server:functional-008271 --alsologtostderr                                                                   │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ update-context │ functional-008271 update-context --alsologtostderr -v=2                                                                                                         │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image ls                                                                                                                                      │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ update-context │ functional-008271 update-context --alsologtostderr -v=2                                                                                                         │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image save kicbase/echo-server:functional-008271 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ update-context │ functional-008271 update-context --alsologtostderr -v=2                                                                                                         │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image rm kicbase/echo-server:functional-008271 --alsologtostderr                                                                              │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image ls                                                                                                                                      │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image ls                                                                                                                                      │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image save --daemon kicbase/echo-server:functional-008271 --alsologtostderr                                                                   │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image ls --format short --alsologtostderr                                                                                                     │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image ls --format yaml --alsologtostderr                                                                                                      │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image ls --format json --alsologtostderr                                                                                                      │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image ls --format table --alsologtostderr                                                                                                     │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ ssh            │ functional-008271 ssh pgrep buildkitd                                                                                                                           │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │                     │
	│ image          │ functional-008271 image build -t localhost/my-image:functional-008271 testdata/build --alsologtostderr                                                          │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image          │ functional-008271 image ls                                                                                                                                      │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ delete         │ -p functional-008271                                                                                                                                            │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ start          │ -p functional-384006 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0         │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │                     │
	│ start          │ -p functional-384006 --alsologtostderr -v=8                                                                                                                     │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:49 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:49:06
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:49:06.161667   48438 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:49:06.161882   48438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:49:06.161913   48438 out.go:374] Setting ErrFile to fd 2...
	I1212 19:49:06.161935   48438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:49:06.162192   48438 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 19:49:06.162605   48438 out.go:368] Setting JSON to false
	I1212 19:49:06.163501   48438 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1896,"bootTime":1765567051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 19:49:06.163603   48438 start.go:143] virtualization:  
	I1212 19:49:06.167059   48438 out.go:179] * [functional-384006] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 19:49:06.170023   48438 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:49:06.170127   48438 notify.go:221] Checking for updates...
	I1212 19:49:06.175791   48438 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:49:06.178620   48438 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:49:06.181479   48438 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	I1212 19:49:06.184334   48438 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 19:49:06.187177   48438 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:49:06.190472   48438 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 19:49:06.190582   48438 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:49:06.226589   48438 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 19:49:06.226705   48438 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:49:06.287038   48438 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 19:49:06.278380602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 19:49:06.287144   48438 docker.go:319] overlay module found
	I1212 19:49:06.290214   48438 out.go:179] * Using the docker driver based on existing profile
	I1212 19:49:06.293103   48438 start.go:309] selected driver: docker
	I1212 19:49:06.293122   48438 start.go:927] validating driver "docker" against &{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:49:06.293257   48438 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:49:06.293353   48438 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:49:06.346602   48438 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 19:49:06.338111982 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 19:49:06.347001   48438 cni.go:84] Creating CNI manager for ""
	I1212 19:49:06.347058   48438 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 19:49:06.347109   48438 start.go:353] cluster config:
	{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:49:06.350199   48438 out.go:179] * Starting "functional-384006" primary control-plane node in "functional-384006" cluster
	I1212 19:49:06.353090   48438 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 19:49:06.356052   48438 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 19:49:06.358945   48438 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 19:49:06.359005   48438 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 19:49:06.359039   48438 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 19:49:06.359049   48438 cache.go:65] Caching tarball of preloaded images
	I1212 19:49:06.359132   48438 preload.go:238] Found /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 19:49:06.359143   48438 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 19:49:06.359246   48438 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/config.json ...
	I1212 19:49:06.377622   48438 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 19:49:06.377646   48438 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 19:49:06.377660   48438 cache.go:243] Successfully downloaded all kic artifacts
	I1212 19:49:06.377689   48438 start.go:360] acquireMachinesLock for functional-384006: {Name:mk3334c8fedf7efc32fb4628474f2cba3c1d9181 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:49:06.377751   48438 start.go:364] duration metric: took 39.285µs to acquireMachinesLock for "functional-384006"
	I1212 19:49:06.377774   48438 start.go:96] Skipping create...Using existing machine configuration
	I1212 19:49:06.377781   48438 fix.go:54] fixHost starting: 
	I1212 19:49:06.378037   48438 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
	I1212 19:49:06.394046   48438 fix.go:112] recreateIfNeeded on functional-384006: state=Running err=<nil>
	W1212 19:49:06.394073   48438 fix.go:138] unexpected machine state, will restart: <nil>
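
(The state probe above can be reproduced by hand; docker reports the raw lowercase state, which minikube normalizes to the "state=Running" seen in the log. Illustrative only:)

    docker container inspect functional-384006 --format={{.State.Status}}
    # => running   (so minikube skips create and reuses the existing machine configuration)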
	I1212 19:49:06.397347   48438 out.go:252] * Updating the running docker "functional-384006" container ...
	I1212 19:49:06.397378   48438 machine.go:94] provisionDockerMachine start ...
	I1212 19:49:06.397470   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:06.413547   48438 main.go:143] libmachine: Using SSH client type: native
	I1212 19:49:06.413876   48438 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:49:06.413891   48438 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 19:49:06.567084   48438 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-384006
	
	I1212 19:49:06.567107   48438 ubuntu.go:182] provisioning hostname "functional-384006"
	I1212 19:49:06.567205   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:06.584099   48438 main.go:143] libmachine: Using SSH client type: native
	I1212 19:49:06.584405   48438 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:49:06.584422   48438 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-384006 && echo "functional-384006" | sudo tee /etc/hostname
	I1212 19:49:06.744613   48438 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-384006
	
	I1212 19:49:06.744691   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:06.765941   48438 main.go:143] libmachine: Using SSH client type: native
	I1212 19:49:06.766253   48438 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:49:06.766274   48438 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-384006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-384006/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-384006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 19:49:06.919909   48438 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 19:49:06.919937   48438 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-2315/.minikube}
	I1212 19:49:06.919964   48438 ubuntu.go:190] setting up certificates
	I1212 19:49:06.919986   48438 provision.go:84] configureAuth start
	I1212 19:49:06.920046   48438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384006
	I1212 19:49:06.936937   48438 provision.go:143] copyHostCerts
	I1212 19:49:06.936980   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem
	I1212 19:49:06.937022   48438 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem, removing ...
	I1212 19:49:06.937035   48438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem
	I1212 19:49:06.937107   48438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem (1078 bytes)
	I1212 19:49:06.937204   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem
	I1212 19:49:06.937227   48438 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem, removing ...
	I1212 19:49:06.937232   48438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem
	I1212 19:49:06.937260   48438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem (1123 bytes)
	I1212 19:49:06.937320   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem
	I1212 19:49:06.937341   48438 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem, removing ...
	I1212 19:49:06.937354   48438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem
	I1212 19:49:06.937380   48438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem (1679 bytes)
	I1212 19:49:06.937435   48438 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem org=jenkins.functional-384006 san=[127.0.0.1 192.168.49.2 functional-384006 localhost minikube]
	I1212 19:49:07.142288   48438 provision.go:177] copyRemoteCerts
	I1212 19:49:07.142366   48438 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 19:49:07.142409   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:07.158934   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:07.267886   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 19:49:07.267945   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 19:49:07.284419   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 19:49:07.284477   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 19:49:07.301465   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 19:49:07.301546   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 19:49:07.318717   48438 provision.go:87] duration metric: took 398.706755ms to configureAuth
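
(configureAuth above regenerates the server certificate with the SAN set listed at 19:49:06.937435: 127.0.0.1, 192.168.49.2, functional-384006, localhost, minikube. The result can be spot-checked with openssl, assuming OpenSSL 1.1.1 or newer on the host; illustrative only:)

    openssl x509 -in /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem -noout -ext subjectAltName
    # should list the IPs and DNS names from the san=[...] set in the provision log above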
	I1212 19:49:07.318790   48438 ubuntu.go:206] setting minikube options for container-runtime
	I1212 19:49:07.319006   48438 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 19:49:07.319035   48438 machine.go:97] duration metric: took 921.650297ms to provisionDockerMachine
	I1212 19:49:07.319058   48438 start.go:293] postStartSetup for "functional-384006" (driver="docker")
	I1212 19:49:07.319080   48438 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 19:49:07.319173   48438 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 19:49:07.319238   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:07.336520   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:07.439884   48438 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 19:49:07.443234   48438 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1212 19:49:07.443254   48438 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1212 19:49:07.443259   48438 command_runner.go:130] > VERSION_ID="12"
	I1212 19:49:07.443263   48438 command_runner.go:130] > VERSION="12 (bookworm)"
	I1212 19:49:07.443268   48438 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1212 19:49:07.443272   48438 command_runner.go:130] > ID=debian
	I1212 19:49:07.443276   48438 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1212 19:49:07.443281   48438 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1212 19:49:07.443289   48438 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1212 19:49:07.443341   48438 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 19:49:07.443361   48438 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
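(Note: the fields parsed above come straight from /etc/os-release, which is a shell-sourceable KEY=VALUE file; the "Couldn't set key VERSION_CODENAME" warning is libmachine skipping a field it has no struct mapping for, not an error. A minimal sketch for inspecting the same data by hand on the node:

    # /etc/os-release is plain KEY=VALUE, so it can be sourced directly
    . /etc/os-release
    echo "$PRETTY_NAME ($ID $VERSION_ID)"   # -> Debian GNU/Linux 12 (bookworm) (debian 12)
)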
	I1212 19:49:07.443371   48438 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-2315/.minikube/addons for local assets ...
	I1212 19:49:07.443421   48438 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-2315/.minikube/files for local assets ...
	I1212 19:49:07.443503   48438 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1212 19:49:07.443510   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem -> /etc/ssl/certs/41202.pem
	I1212 19:49:07.443585   48438 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> hosts in /etc/test/nested/copy/4120
	I1212 19:49:07.443589   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> /etc/test/nested/copy/4120/hosts
	I1212 19:49:07.443629   48438 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4120
	I1212 19:49:07.450818   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1212 19:49:07.468474   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts --> /etc/test/nested/copy/4120/hosts (40 bytes)
	I1212 19:49:07.485034   48438 start.go:296] duration metric: took 165.952143ms for postStartSetup
	I1212 19:49:07.485111   48438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 19:49:07.485180   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:07.502057   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:07.604226   48438 command_runner.go:130] > 12%
	I1212 19:49:07.604746   48438 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 19:49:07.609551   48438 command_runner.go:130] > 172G
	I1212 19:49:07.609593   48438 fix.go:56] duration metric: took 1.231809331s for fixHost
	I1212 19:49:07.609604   48438 start.go:83] releasing machines lock for "functional-384006", held for 1.231841888s
	I1212 19:49:07.609687   48438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384006
	I1212 19:49:07.626230   48438 ssh_runner.go:195] Run: cat /version.json
	I1212 19:49:07.626285   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:07.626592   48438 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 19:49:07.626649   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:07.648515   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:07.651511   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:07.751468   48438 command_runner.go:130] > {"iso_version": "v1.37.0-1765481609-22101", "kicbase_version": "v0.0.48-1765505794-22112", "minikube_version": "v1.37.0", "commit": "2e51b54b5cee5d454381ac23cfe3d8d395879671"}
	I1212 19:49:07.751688   48438 ssh_runner.go:195] Run: systemctl --version
	I1212 19:49:07.840262   48438 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 19:49:07.843071   48438 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1212 19:49:07.843106   48438 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1212 19:49:07.843235   48438 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 19:49:07.847707   48438 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 19:49:07.847791   48438 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 19:49:07.847870   48438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 19:49:07.855348   48438 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 19:49:07.855380   48438 start.go:496] detecting cgroup driver to use...
	I1212 19:49:07.855411   48438 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 19:49:07.855473   48438 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 19:49:07.872745   48438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 19:49:07.888438   48438 docker.go:218] disabling cri-docker service (if available) ...
	I1212 19:49:07.888499   48438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 19:49:07.905328   48438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 19:49:07.922378   48438 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 19:49:08.040559   48438 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 19:49:08.153632   48438 docker.go:234] disabling docker service ...
	I1212 19:49:08.153749   48438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 19:49:08.170255   48438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 19:49:08.183563   48438 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 19:49:08.296935   48438 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 19:49:08.413119   48438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 19:49:08.425880   48438 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 19:49:08.438681   48438 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 19:49:08.439732   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 19:49:08.448541   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 19:49:08.457430   48438 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 19:49:08.457506   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 19:49:08.466099   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:49:08.474729   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 19:49:08.483278   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:49:08.491712   48438 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 19:49:08.499807   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 19:49:08.508171   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 19:49:08.517078   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 19:49:08.525348   48438 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 19:49:08.531636   48438 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 19:49:08.532621   48438 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 19:49:08.539615   48438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:49:08.670670   48438 ssh_runner.go:195] Run: sudo systemctl restart containerd
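(Note: the sed edits above rewrite /etc/containerd/config.toml in place, setting the cgroupfs driver, the pause:3.10.1 sandbox image, and the runc v2 runtime, before the daemon-reload and restart. A hedged sketch for spot-checking the result on the node, using only standard tools:

    # confirm the cgroup driver and sandbox image edits landed
    sudo grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
    # expected after the edits above:
    #   SystemdCgroup = false
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    sudo systemctl is-active containerd
)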
	I1212 19:49:08.806796   48438 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 19:49:08.806894   48438 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 19:49:08.810696   48438 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1212 19:49:08.810773   48438 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 19:49:08.810802   48438 command_runner.go:130] > Device: 0,72	Inode: 1616        Links: 1
	I1212 19:49:08.810829   48438 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 19:49:08.810848   48438 command_runner.go:130] > Access: 2025-12-12 19:49:08.757711126 +0000
	I1212 19:49:08.810866   48438 command_runner.go:130] > Modify: 2025-12-12 19:49:08.757711126 +0000
	I1212 19:49:08.810881   48438 command_runner.go:130] > Change: 2025-12-12 19:49:08.757711126 +0000
	I1212 19:49:08.810904   48438 command_runner.go:130] >  Birth: -
	I1212 19:49:08.811086   48438 start.go:564] Will wait 60s for crictl version
	I1212 19:49:08.811174   48438 ssh_runner.go:195] Run: which crictl
	I1212 19:49:08.814485   48438 command_runner.go:130] > /usr/local/bin/crictl
	I1212 19:49:08.814611   48438 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 19:49:08.838884   48438 command_runner.go:130] > Version:  0.1.0
	I1212 19:49:08.838955   48438 command_runner.go:130] > RuntimeName:  containerd
	I1212 19:49:08.838976   48438 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1212 19:49:08.838997   48438 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 19:49:08.840776   48438 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 19:49:08.840864   48438 ssh_runner.go:195] Run: containerd --version
	I1212 19:49:08.863238   48438 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1212 19:49:08.864954   48438 ssh_runner.go:195] Run: containerd --version
	I1212 19:49:08.884422   48438 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1212 19:49:08.891508   48438 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 19:49:08.894468   48438 cli_runner.go:164] Run: docker network inspect functional-384006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 19:49:08.910430   48438 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 19:49:08.914297   48438 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1212 19:49:08.914409   48438 kubeadm.go:884] updating cluster {Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 19:49:08.914505   48438 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 19:49:08.914560   48438 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 19:49:08.938916   48438 command_runner.go:130] > {
	I1212 19:49:08.938935   48438 command_runner.go:130] >   "images":  [
	I1212 19:49:08.938940   48438 command_runner.go:130] >     {
	I1212 19:49:08.938949   48438 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 19:49:08.938953   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.938959   48438 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 19:49:08.938962   48438 command_runner.go:130] >       ],
	I1212 19:49:08.938967   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.938980   48438 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1212 19:49:08.938983   48438 command_runner.go:130] >       ],
	I1212 19:49:08.938988   48438 command_runner.go:130] >       "size":  "40636774",
	I1212 19:49:08.938991   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.938995   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.938998   48438 command_runner.go:130] >     },
	I1212 19:49:08.939001   48438 command_runner.go:130] >     {
	I1212 19:49:08.939009   48438 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 19:49:08.939013   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939018   48438 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 19:49:08.939022   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939026   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939034   48438 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 19:49:08.939038   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939045   48438 command_runner.go:130] >       "size":  "8034419",
	I1212 19:49:08.939049   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939053   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939056   48438 command_runner.go:130] >     },
	I1212 19:49:08.939059   48438 command_runner.go:130] >     {
	I1212 19:49:08.939066   48438 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 19:49:08.939069   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939075   48438 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 19:49:08.939078   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939084   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939091   48438 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1212 19:49:08.939095   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939100   48438 command_runner.go:130] >       "size":  "21168808",
	I1212 19:49:08.939104   48438 command_runner.go:130] >       "username":  "nonroot",
	I1212 19:49:08.939108   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939111   48438 command_runner.go:130] >     },
	I1212 19:49:08.939115   48438 command_runner.go:130] >     {
	I1212 19:49:08.939121   48438 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 19:49:08.939125   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939130   48438 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 19:49:08.939133   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939137   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939154   48438 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1212 19:49:08.939157   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939161   48438 command_runner.go:130] >       "size":  "21136588",
	I1212 19:49:08.939166   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.939170   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.939173   48438 command_runner.go:130] >       },
	I1212 19:49:08.939177   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939181   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939184   48438 command_runner.go:130] >     },
	I1212 19:49:08.939187   48438 command_runner.go:130] >     {
	I1212 19:49:08.939193   48438 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 19:49:08.939200   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939206   48438 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 19:49:08.939209   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939213   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939220   48438 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1212 19:49:08.939224   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939228   48438 command_runner.go:130] >       "size":  "24678359",
	I1212 19:49:08.939231   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.939241   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.939244   48438 command_runner.go:130] >       },
	I1212 19:49:08.939248   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939252   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939254   48438 command_runner.go:130] >     },
	I1212 19:49:08.939257   48438 command_runner.go:130] >     {
	I1212 19:49:08.939264   48438 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 19:49:08.939268   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939273   48438 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 19:49:08.939276   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939280   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939288   48438 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1212 19:49:08.939291   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939295   48438 command_runner.go:130] >       "size":  "20661043",
	I1212 19:49:08.939299   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.939302   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.939305   48438 command_runner.go:130] >       },
	I1212 19:49:08.939309   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939313   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939316   48438 command_runner.go:130] >     },
	I1212 19:49:08.939319   48438 command_runner.go:130] >     {
	I1212 19:49:08.939326   48438 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 19:49:08.939330   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939334   48438 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 19:49:08.939338   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939345   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939353   48438 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 19:49:08.939356   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939360   48438 command_runner.go:130] >       "size":  "22429671",
	I1212 19:49:08.939364   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939368   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939370   48438 command_runner.go:130] >     },
	I1212 19:49:08.939375   48438 command_runner.go:130] >     {
	I1212 19:49:08.939381   48438 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 19:49:08.939385   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939390   48438 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 19:49:08.939393   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939397   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939405   48438 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1212 19:49:08.939408   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939412   48438 command_runner.go:130] >       "size":  "15391364",
	I1212 19:49:08.939416   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.939420   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.939423   48438 command_runner.go:130] >       },
	I1212 19:49:08.939427   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939430   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939433   48438 command_runner.go:130] >     },
	I1212 19:49:08.939437   48438 command_runner.go:130] >     {
	I1212 19:49:08.939443   48438 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 19:49:08.939447   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939452   48438 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 19:49:08.939454   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939458   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939465   48438 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1212 19:49:08.939469   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939473   48438 command_runner.go:130] >       "size":  "267939",
	I1212 19:49:08.939476   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.939480   48438 command_runner.go:130] >         "value":  "65535"
	I1212 19:49:08.939486   48438 command_runner.go:130] >       },
	I1212 19:49:08.939490   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939493   48438 command_runner.go:130] >       "pinned":  true
	I1212 19:49:08.939496   48438 command_runner.go:130] >     }
	I1212 19:49:08.939499   48438 command_runner.go:130] >   ]
	I1212 19:49:08.939502   48438 command_runner.go:130] > }
	I1212 19:49:08.940984   48438 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 19:49:08.941004   48438 containerd.go:534] Images already preloaded, skipping extraction
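(Note: minikube lists images twice here: once above to decide whether the preload tarball needs extracting, and again just below to re-verify after skipping the extraction. To eyeball the same inventory without reading the raw JSON, something like the following works, assuming jq is available on the host, which this log does not show:

    # compact view of the preloaded images
    sudo crictl images --output json | jq -r '.images[].repoTags[]'
    # or skip jq entirely:
    sudo crictl images
)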
	I1212 19:49:08.941060   48438 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 19:49:08.962883   48438 command_runner.go:130] > {
	I1212 19:49:08.962905   48438 command_runner.go:130] >   "images":  [
	I1212 19:49:08.962910   48438 command_runner.go:130] >     {
	I1212 19:49:08.962919   48438 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 19:49:08.962924   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.962930   48438 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 19:49:08.962934   48438 command_runner.go:130] >       ],
	I1212 19:49:08.962938   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.962948   48438 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1212 19:49:08.962955   48438 command_runner.go:130] >       ],
	I1212 19:49:08.962964   48438 command_runner.go:130] >       "size":  "40636774",
	I1212 19:49:08.962971   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.962975   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.962985   48438 command_runner.go:130] >     },
	I1212 19:49:08.962993   48438 command_runner.go:130] >     {
	I1212 19:49:08.963005   48438 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 19:49:08.963012   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963017   48438 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 19:49:08.963021   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963026   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963035   48438 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 19:49:08.963040   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963045   48438 command_runner.go:130] >       "size":  "8034419",
	I1212 19:49:08.963049   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963055   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963058   48438 command_runner.go:130] >     },
	I1212 19:49:08.963064   48438 command_runner.go:130] >     {
	I1212 19:49:08.963071   48438 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 19:49:08.963081   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963086   48438 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 19:49:08.963090   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963104   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963113   48438 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1212 19:49:08.963116   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963123   48438 command_runner.go:130] >       "size":  "21168808",
	I1212 19:49:08.963127   48438 command_runner.go:130] >       "username":  "nonroot",
	I1212 19:49:08.963132   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963137   48438 command_runner.go:130] >     },
	I1212 19:49:08.963146   48438 command_runner.go:130] >     {
	I1212 19:49:08.963157   48438 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 19:49:08.963170   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963175   48438 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 19:49:08.963178   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963187   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963198   48438 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1212 19:49:08.963201   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963210   48438 command_runner.go:130] >       "size":  "21136588",
	I1212 19:49:08.963214   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.963221   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.963224   48438 command_runner.go:130] >       },
	I1212 19:49:08.963228   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963234   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963238   48438 command_runner.go:130] >     },
	I1212 19:49:08.963241   48438 command_runner.go:130] >     {
	I1212 19:49:08.963248   48438 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 19:49:08.963255   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963260   48438 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 19:49:08.963263   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963266   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963274   48438 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1212 19:49:08.963281   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963285   48438 command_runner.go:130] >       "size":  "24678359",
	I1212 19:49:08.963288   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.963298   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.963302   48438 command_runner.go:130] >       },
	I1212 19:49:08.963309   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963313   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963319   48438 command_runner.go:130] >     },
	I1212 19:49:08.963322   48438 command_runner.go:130] >     {
	I1212 19:49:08.963329   48438 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 19:49:08.963336   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963341   48438 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 19:49:08.963344   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963348   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963356   48438 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1212 19:49:08.963363   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963367   48438 command_runner.go:130] >       "size":  "20661043",
	I1212 19:49:08.963370   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.963374   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.963382   48438 command_runner.go:130] >       },
	I1212 19:49:08.963389   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963393   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963396   48438 command_runner.go:130] >     },
	I1212 19:49:08.963399   48438 command_runner.go:130] >     {
	I1212 19:49:08.963406   48438 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 19:49:08.963413   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963418   48438 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 19:49:08.963421   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963425   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963433   48438 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 19:49:08.963440   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963444   48438 command_runner.go:130] >       "size":  "22429671",
	I1212 19:49:08.963448   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963452   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963455   48438 command_runner.go:130] >     },
	I1212 19:49:08.963458   48438 command_runner.go:130] >     {
	I1212 19:49:08.963465   48438 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 19:49:08.963472   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963478   48438 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 19:49:08.963483   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963487   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963498   48438 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1212 19:49:08.963503   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963509   48438 command_runner.go:130] >       "size":  "15391364",
	I1212 19:49:08.963515   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.963518   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.963521   48438 command_runner.go:130] >       },
	I1212 19:49:08.963525   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963529   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963534   48438 command_runner.go:130] >     },
	I1212 19:49:08.963537   48438 command_runner.go:130] >     {
	I1212 19:49:08.963547   48438 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 19:49:08.963555   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963560   48438 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 19:49:08.963566   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963570   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963580   48438 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1212 19:49:08.963587   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963591   48438 command_runner.go:130] >       "size":  "267939",
	I1212 19:49:08.963594   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.963598   48438 command_runner.go:130] >         "value":  "65535"
	I1212 19:49:08.963604   48438 command_runner.go:130] >       },
	I1212 19:49:08.963611   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963615   48438 command_runner.go:130] >       "pinned":  true
	I1212 19:49:08.963618   48438 command_runner.go:130] >     }
	I1212 19:49:08.963621   48438 command_runner.go:130] >   ]
	I1212 19:49:08.963624   48438 command_runner.go:130] > }
	I1212 19:49:08.965735   48438 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 19:49:08.965756   48438 cache_images.go:86] Images are preloaded, skipping loading
	I1212 19:49:08.965764   48438 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1212 19:49:08.965868   48438 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-384006 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 19:49:08.965936   48438 ssh_runner.go:195] Run: sudo crictl info
	I1212 19:49:08.990907   48438 command_runner.go:130] > {
	I1212 19:49:08.990927   48438 command_runner.go:130] >   "cniconfig": {
	I1212 19:49:08.990932   48438 command_runner.go:130] >     "Networks": [
	I1212 19:49:08.990936   48438 command_runner.go:130] >       {
	I1212 19:49:08.990942   48438 command_runner.go:130] >         "Config": {
	I1212 19:49:08.990947   48438 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1212 19:49:08.990980   48438 command_runner.go:130] >           "Name": "cni-loopback",
	I1212 19:49:08.990997   48438 command_runner.go:130] >           "Plugins": [
	I1212 19:49:08.991002   48438 command_runner.go:130] >             {
	I1212 19:49:08.991010   48438 command_runner.go:130] >               "Network": {
	I1212 19:49:08.991014   48438 command_runner.go:130] >                 "ipam": {},
	I1212 19:49:08.991020   48438 command_runner.go:130] >                 "type": "loopback"
	I1212 19:49:08.991023   48438 command_runner.go:130] >               },
	I1212 19:49:08.991033   48438 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1212 19:49:08.991041   48438 command_runner.go:130] >             }
	I1212 19:49:08.991063   48438 command_runner.go:130] >           ],
	I1212 19:49:08.991073   48438 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1212 19:49:08.991080   48438 command_runner.go:130] >         },
	I1212 19:49:08.991089   48438 command_runner.go:130] >         "IFName": "lo"
	I1212 19:49:08.991095   48438 command_runner.go:130] >       }
	I1212 19:49:08.991098   48438 command_runner.go:130] >     ],
	I1212 19:49:08.991103   48438 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1212 19:49:08.991106   48438 command_runner.go:130] >     "PluginDirs": [
	I1212 19:49:08.991109   48438 command_runner.go:130] >       "/opt/cni/bin"
	I1212 19:49:08.991113   48438 command_runner.go:130] >     ],
	I1212 19:49:08.991117   48438 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1212 19:49:08.991135   48438 command_runner.go:130] >     "Prefix": "eth"
	I1212 19:49:08.991151   48438 command_runner.go:130] >   },
	I1212 19:49:08.991154   48438 command_runner.go:130] >   "config": {
	I1212 19:49:08.991158   48438 command_runner.go:130] >     "cdiSpecDirs": [
	I1212 19:49:08.991171   48438 command_runner.go:130] >       "/etc/cdi",
	I1212 19:49:08.991184   48438 command_runner.go:130] >       "/var/run/cdi"
	I1212 19:49:08.991188   48438 command_runner.go:130] >     ],
	I1212 19:49:08.991191   48438 command_runner.go:130] >     "cni": {
	I1212 19:49:08.991195   48438 command_runner.go:130] >       "binDir": "",
	I1212 19:49:08.991202   48438 command_runner.go:130] >       "binDirs": [
	I1212 19:49:08.991206   48438 command_runner.go:130] >         "/opt/cni/bin"
	I1212 19:49:08.991209   48438 command_runner.go:130] >       ],
	I1212 19:49:08.991216   48438 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1212 19:49:08.991220   48438 command_runner.go:130] >       "confTemplate": "",
	I1212 19:49:08.991224   48438 command_runner.go:130] >       "ipPref": "",
	I1212 19:49:08.991227   48438 command_runner.go:130] >       "maxConfNum": 1,
	I1212 19:49:08.991231   48438 command_runner.go:130] >       "setupSerially": false,
	I1212 19:49:08.991235   48438 command_runner.go:130] >       "useInternalLoopback": false
	I1212 19:49:08.991248   48438 command_runner.go:130] >     },
	I1212 19:49:08.991264   48438 command_runner.go:130] >     "containerd": {
	I1212 19:49:08.991273   48438 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1212 19:49:08.991288   48438 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1212 19:49:08.991302   48438 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1212 19:49:08.991311   48438 command_runner.go:130] >       "runtimes": {
	I1212 19:49:08.991317   48438 command_runner.go:130] >         "runc": {
	I1212 19:49:08.991321   48438 command_runner.go:130] >           "ContainerAnnotations": null,
	I1212 19:49:08.991325   48438 command_runner.go:130] >           "PodAnnotations": null,
	I1212 19:49:08.991329   48438 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1212 19:49:08.991340   48438 command_runner.go:130] >           "cgroupWritable": false,
	I1212 19:49:08.991344   48438 command_runner.go:130] >           "cniConfDir": "",
	I1212 19:49:08.991347   48438 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1212 19:49:08.991351   48438 command_runner.go:130] >           "io_type": "",
	I1212 19:49:08.991366   48438 command_runner.go:130] >           "options": {
	I1212 19:49:08.991378   48438 command_runner.go:130] >             "BinaryName": "",
	I1212 19:49:08.991382   48438 command_runner.go:130] >             "CriuImagePath": "",
	I1212 19:49:08.991386   48438 command_runner.go:130] >             "CriuWorkPath": "",
	I1212 19:49:08.991400   48438 command_runner.go:130] >             "IoGid": 0,
	I1212 19:49:08.991410   48438 command_runner.go:130] >             "IoUid": 0,
	I1212 19:49:08.991414   48438 command_runner.go:130] >             "NoNewKeyring": false,
	I1212 19:49:08.991418   48438 command_runner.go:130] >             "Root": "",
	I1212 19:49:08.991422   48438 command_runner.go:130] >             "ShimCgroup": "",
	I1212 19:49:08.991427   48438 command_runner.go:130] >             "SystemdCgroup": false
	I1212 19:49:08.991433   48438 command_runner.go:130] >           },
	I1212 19:49:08.991439   48438 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1212 19:49:08.991455   48438 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1212 19:49:08.991461   48438 command_runner.go:130] >           "runtimePath": "",
	I1212 19:49:08.991476   48438 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1212 19:49:08.991487   48438 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1212 19:49:08.991491   48438 command_runner.go:130] >           "snapshotter": ""
	I1212 19:49:08.991503   48438 command_runner.go:130] >         }
	I1212 19:49:08.991510   48438 command_runner.go:130] >       }
	I1212 19:49:08.991513   48438 command_runner.go:130] >     },
	I1212 19:49:08.991525   48438 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1212 19:49:08.991540   48438 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1212 19:49:08.991547   48438 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1212 19:49:08.991554   48438 command_runner.go:130] >     "disableApparmor": false,
	I1212 19:49:08.991559   48438 command_runner.go:130] >     "disableHugetlbController": true,
	I1212 19:49:08.991564   48438 command_runner.go:130] >     "disableProcMount": false,
	I1212 19:49:08.991583   48438 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1212 19:49:08.991588   48438 command_runner.go:130] >     "enableCDI": true,
	I1212 19:49:08.991603   48438 command_runner.go:130] >     "enableSelinux": false,
	I1212 19:49:08.991616   48438 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1212 19:49:08.991621   48438 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1212 19:49:08.991627   48438 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1212 19:49:08.991634   48438 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1212 19:49:08.991639   48438 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1212 19:49:08.991643   48438 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1212 19:49:08.991653   48438 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1212 19:49:08.991658   48438 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1212 19:49:08.991662   48438 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1212 19:49:08.991678   48438 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1212 19:49:08.991689   48438 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1212 19:49:08.991694   48438 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1212 19:49:08.991696   48438 command_runner.go:130] >   },
	I1212 19:49:08.991700   48438 command_runner.go:130] >   "features": {
	I1212 19:49:08.991704   48438 command_runner.go:130] >     "supplemental_groups_policy": true
	I1212 19:49:08.991706   48438 command_runner.go:130] >   },
	I1212 19:49:08.991710   48438 command_runner.go:130] >   "golang": "go1.24.9",
	I1212 19:49:08.991719   48438 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1212 19:49:08.991728   48438 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1212 19:49:08.991732   48438 command_runner.go:130] >   "runtimeHandlers": [
	I1212 19:49:08.991735   48438 command_runner.go:130] >     {
	I1212 19:49:08.991739   48438 command_runner.go:130] >       "features": {
	I1212 19:49:08.991743   48438 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1212 19:49:08.991747   48438 command_runner.go:130] >         "user_namespaces": true
	I1212 19:49:08.991751   48438 command_runner.go:130] >       }
	I1212 19:49:08.991759   48438 command_runner.go:130] >     },
	I1212 19:49:08.991762   48438 command_runner.go:130] >     {
	I1212 19:49:08.991766   48438 command_runner.go:130] >       "features": {
	I1212 19:49:08.991770   48438 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1212 19:49:08.991774   48438 command_runner.go:130] >         "user_namespaces": true
	I1212 19:49:08.991796   48438 command_runner.go:130] >       },
	I1212 19:49:08.991800   48438 command_runner.go:130] >       "name": "runc"
	I1212 19:49:08.991803   48438 command_runner.go:130] >     }
	I1212 19:49:08.991807   48438 command_runner.go:130] >   ],
	I1212 19:49:08.991875   48438 command_runner.go:130] >   "status": {
	I1212 19:49:08.991889   48438 command_runner.go:130] >     "conditions": [
	I1212 19:49:08.991892   48438 command_runner.go:130] >       {
	I1212 19:49:08.991895   48438 command_runner.go:130] >         "message": "",
	I1212 19:49:08.991899   48438 command_runner.go:130] >         "reason": "",
	I1212 19:49:08.991904   48438 command_runner.go:130] >         "status": true,
	I1212 19:49:08.991918   48438 command_runner.go:130] >         "type": "RuntimeReady"
	I1212 19:49:08.991921   48438 command_runner.go:130] >       },
	I1212 19:49:08.991925   48438 command_runner.go:130] >       {
	I1212 19:49:08.991939   48438 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1212 19:49:08.991955   48438 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1212 19:49:08.991963   48438 command_runner.go:130] >         "status": false,
	I1212 19:49:08.991967   48438 command_runner.go:130] >         "type": "NetworkReady"
	I1212 19:49:08.991970   48438 command_runner.go:130] >       },
	I1212 19:49:08.991989   48438 command_runner.go:130] >       {
	I1212 19:49:08.992014   48438 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1212 19:49:08.992028   48438 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1212 19:49:08.992037   48438 command_runner.go:130] >         "status": false,
	I1212 19:49:08.992042   48438 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1212 19:49:08.992045   48438 command_runner.go:130] >       }
	I1212 19:49:08.992058   48438 command_runner.go:130] >     ]
	I1212 19:49:08.992068   48438 command_runner.go:130] >   }
	I1212 19:49:08.992071   48438 command_runner.go:130] > }
	I1212 19:49:08.994409   48438 cni.go:84] Creating CNI manager for ""
	I1212 19:49:08.994432   48438 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 19:49:08.994453   48438 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
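(Note: the `crictl info` dump above is where the NetworkReady=false / "cni plugin not initialized" condition appears. That is expected at this point: the kindnet CNI recommended above is only deployed after kubeadm runs, so /etc/cni/net.d is still empty. A hedged one-liner to pull just the readiness conditions out of that JSON, again assuming jq:

    sudo crictl info | jq '.status.conditions'
)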
	I1212 19:49:08.994474   48438 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-384006 NodeName:functional-384006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 19:49:08.994579   48438 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-384006"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 19:49:08.994644   48438 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 19:49:09.001254   48438 command_runner.go:130] > kubeadm
	I1212 19:49:09.001273   48438 command_runner.go:130] > kubectl
	I1212 19:49:09.001277   48438 command_runner.go:130] > kubelet
	I1212 19:49:09.002097   48438 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 19:49:09.002172   48438 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 19:49:09.009620   48438 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 19:49:09.025282   48438 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 19:49:09.038423   48438 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
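(Note: the rendered kubeadm config shown above was just written to /var/tmp/minikube/kubeadm.yaml.new. It can be sanity-checked against the v1beta4 schema with kubeadm's own validator; a hedged sketch, since this step is not part of the minikube flow shown here:

    # validate the generated config (kubeadm config validate exists in 1.26+)
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
)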
	I1212 19:49:09.054506   48438 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 19:49:09.058001   48438 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1212 19:49:09.058066   48438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:49:09.175064   48438 ssh_runner.go:195] Run: sudo systemctl start kubelet
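(Note: the unit rendered earlier lands as a systemd drop-in, 10-kubeadm.conf, next to the base kubelet.service, which is why ExecStart= appears twice: the empty assignment clears the base unit's ExecStart before the override sets the real command line. To see the merged unit on the node, standard systemd commands not shown in this log:

    sudo systemctl cat kubelet     # base unit plus the 10-kubeadm.conf drop-in
    systemctl is-active kubelet
)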
	I1212 19:49:09.445347   48438 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006 for IP: 192.168.49.2
	I1212 19:49:09.445426   48438 certs.go:195] generating shared ca certs ...
	I1212 19:49:09.445484   48438 certs.go:227] acquiring lock for ca certs: {Name:mk39256c1929fe0803d745b94bd58afc348a7e3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:49:09.445704   48438 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key
	I1212 19:49:09.445799   48438 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key
	I1212 19:49:09.445839   48438 certs.go:257] generating profile certs ...
	I1212 19:49:09.446025   48438 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.key
	I1212 19:49:09.446164   48438 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key.6e756d1b
	I1212 19:49:09.446275   48438 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key
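(Note: all three profile certs above are reused rather than regenerated because they are still valid for the requested names and IPs. Expiry, subject, and SANs can be checked directly with openssl; a hedged sketch with paths as in the log, where -ext needs OpenSSL 1.1.1+:

    openssl x509 -noout -subject -enddate -ext subjectAltName \
      -in /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.crt
)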
	I1212 19:49:09.446313   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 19:49:09.446386   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 19:49:09.446438   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 19:49:09.446492   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 19:49:09.446544   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 19:49:09.446605   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 19:49:09.446663   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 19:49:09.446721   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 19:49:09.446856   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem (1338 bytes)
	W1212 19:49:09.446943   48438 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1212 19:49:09.447016   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 19:49:09.447074   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem (1078 bytes)
	I1212 19:49:09.447157   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem (1123 bytes)
	I1212 19:49:09.447233   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem (1679 bytes)
	I1212 19:49:09.447516   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1212 19:49:09.447598   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem -> /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.447652   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem -> /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.447686   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.448483   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 19:49:09.470612   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 19:49:09.491665   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 19:49:09.514138   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 19:49:09.535795   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 19:49:09.552964   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 19:49:09.570164   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 19:49:09.587343   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 19:49:09.604384   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1212 19:49:09.621471   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1212 19:49:09.638910   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 19:49:09.656615   48438 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
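
Each scp line above pairs a local cert with its in-VM destination and logs the byte count copied. A local-only sketch of that copy plan (the ssh transport is elided and the source paths are abbreviated from the log):

package main

import (
	"fmt"
	"os"
)

// asset pairs a local source file with its destination, like the
// NewFileAsset lines above. Only two entries are shown for brevity.
type asset struct{ src, dst string }

func main() {
	plan := []asset{
		{"/home/jenkins/.minikube/ca.crt", "/var/lib/minikube/certs/ca.crt"},
		{"/home/jenkins/.minikube/profiles/functional-384006/apiserver.crt", "/var/lib/minikube/certs/apiserver.crt"},
	}
	for _, a := range plan {
		data, err := os.ReadFile(a.src)
		if err != nil {
			fmt.Println("skip:", err)
			continue
		}
		if err := os.WriteFile(a.dst, data, 0o644); err != nil {
			fmt.Println("write:", err)
			continue
		}
		fmt.Printf("%s --> %s (%d bytes)\n", a.src, a.dst, len(data))
	}
}
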
	I1212 19:49:09.669235   48438 ssh_runner.go:195] Run: openssl version
	I1212 19:49:09.674787   48438 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1212 19:49:09.675343   48438 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.682988   48438 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1212 19:49:09.690425   48438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.693996   48438 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 19:40 /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.694309   48438 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:40 /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.694370   48438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.734801   48438 command_runner.go:130] > 3ec20f2e
	I1212 19:49:09.735274   48438 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 19:49:09.742485   48438 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.749966   48438 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 19:49:09.757755   48438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.761677   48438 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 19:30 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.761712   48438 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:30 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.761771   48438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.803349   48438 command_runner.go:130] > b5213941
	I1212 19:49:09.803809   48438 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 19:49:09.811062   48438 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.818242   48438 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1212 19:49:09.825568   48438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.829043   48438 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 19:40 /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.829382   48438 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:40 /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.829462   48438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.872087   48438 command_runner.go:130] > 51391683
	I1212 19:49:09.872525   48438 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
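
The test/ln/hash cycles above maintain OpenSSL's hashed certificate directory: each CA must be reachable as /etc/ssl/certs/<subject-hash>.0, where the hash is what `openssl x509 -hash -noout` prints (3ec20f2e, b5213941, and 51391683 in this run). A sketch that shells out to the same command rather than reimplementing the hash (linkCert is a made-up helper):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert creates the /etc/ssl/certs/<subject-hash>.0 symlink that
// OpenSSL-based tools expect, delegating the hash computation to the
// openssl CLI exactly as the logged commands do.
func linkCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
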
	I1212 19:49:09.879635   48438 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:49:09.883004   48438 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:49:09.883053   48438 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1212 19:49:09.883072   48438 command_runner.go:130] > Device: 259,1	Inode: 1317518     Links: 1
	I1212 19:49:09.883079   48438 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 19:49:09.883085   48438 command_runner.go:130] > Access: 2025-12-12 19:45:02.427863285 +0000
	I1212 19:49:09.883090   48438 command_runner.go:130] > Modify: 2025-12-12 19:40:58.462325249 +0000
	I1212 19:49:09.883095   48438 command_runner.go:130] > Change: 2025-12-12 19:40:58.462325249 +0000
	I1212 19:49:09.883100   48438 command_runner.go:130] >  Birth: 2025-12-12 19:40:58.462325249 +0000
	I1212 19:49:09.883177   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 19:49:09.925331   48438 command_runner.go:130] > Certificate will not expire
	I1212 19:49:09.925758   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 19:49:09.966336   48438 command_runner.go:130] > Certificate will not expire
	I1212 19:49:09.966825   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 19:49:10.007601   48438 command_runner.go:130] > Certificate will not expire
	I1212 19:49:10.008047   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 19:49:10.052009   48438 command_runner.go:130] > Certificate will not expire
	I1212 19:49:10.052500   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 19:49:10.094223   48438 command_runner.go:130] > Certificate will not expire
	I1212 19:49:10.094385   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 19:49:10.136742   48438 command_runner.go:130] > Certificate will not expire
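
Each `-checkend 86400` run above asks whether a certificate is still valid 24 hours from now; exit status 0 prints "Certificate will not expire". A pure-Go equivalent with crypto/x509 (expiresWithin is an illustrative helper, not minikube code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the cert at pemPath expires within d,
// i.e. the inverse of `openssl x509 -checkend` succeeding.
func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err) // "Certificate will not expire" corresponds to soon == false
}
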
	I1212 19:49:10.136814   48438 kubeadm.go:401] StartCluster: {Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:49:10.136904   48438 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 19:49:10.136973   48438 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:49:10.167070   48438 cri.go:89] found id: ""
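
The crictl invocation above lists kube-system container IDs; an empty result (found id: "") means no control-plane containers are running yet. A thin Go wrapper over the same CLI call, simplified from the logged `sudo -s eval` form:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainers returns the IDs of all kube-system containers,
// running or not, via the same crictl flags used in the log.
func kubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := kubeSystemContainers()
	fmt.Println(len(ids), ids, err) // 0 IDs here: the control plane is not up yet
}
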
	I1212 19:49:10.167141   48438 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 19:49:10.174626   48438 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 19:49:10.174649   48438 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 19:49:10.174663   48438 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 19:49:10.175405   48438 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 19:49:10.175423   48438 kubeadm.go:598] restartPrimaryControlPlane start ...
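
The restart decision above rests on three kubeadm artifacts surviving from the previous run. A sketch of the same probe (hasExistingCluster is a made-up name; the paths are the ones listed by the `sudo ls` just before):

package main

import (
	"fmt"
	"os"
)

// hasExistingCluster checks for the kubeadm artifacts listed in the log;
// all present means a previous control plane can be restarted instead of
// running a fresh init.
func hasExistingCluster() bool {
	for _, p := range []string{
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd",
	} {
		if _, err := os.Stat(p); err != nil {
			return false
		}
	}
	return true
}

func main() { fmt.Println(hasExistingCluster()) }
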
	I1212 19:49:10.175476   48438 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 19:49:10.183010   48438 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:49:10.183461   48438 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-384006" does not appear in /home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:49:10.183602   48438 kubeconfig.go:62] /home/jenkins/minikube-integration/22112-2315/kubeconfig needs updating (will repair): [kubeconfig missing "functional-384006" cluster setting kubeconfig missing "functional-384006" context setting]
	I1212 19:49:10.183992   48438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/kubeconfig: {Name:mke1d79e374217e0c5bc78bc2d9631db0e1e9bda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:49:10.184411   48438 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:49:10.184572   48438 kapi.go:59] client config for functional-384006: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.key", CAFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
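
The kapi.go dump is a client-go rest.Config (printed through a sanitizing wrapper, hence rest.sanitizedTLSClientConfig). A minimal sketch constructing an equivalent config by hand, using the cert paths from the log:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Client cert, key, and CA paths as reported by kapi.go above.
	cfg := &rest.Config{
		Host: "https://192.168.49.2:8441",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.key",
			CAFile:   "/home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	fmt.Println(clientset != nil, err)
}
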
	I1212 19:49:10.185056   48438 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 19:49:10.185097   48438 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 19:49:10.185107   48438 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 19:49:10.185113   48438 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 19:49:10.185120   48438 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 19:49:10.185448   48438 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 19:49:10.185546   48438 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 19:49:10.194572   48438 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1212 19:49:10.194610   48438 kubeadm.go:602] duration metric: took 19.175488ms to restartPrimaryControlPlane
	I1212 19:49:10.194619   48438 kubeadm.go:403] duration metric: took 57.811789ms to StartCluster
	I1212 19:49:10.194633   48438 settings.go:142] acquiring lock: {Name:mk405cd0853bb1c41336dcaeeb8fe9a56ff7ca00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:49:10.194694   48438 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:49:10.195302   48438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/kubeconfig: {Name:mke1d79e374217e0c5bc78bc2d9631db0e1e9bda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:49:10.195505   48438 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 19:49:10.195860   48438 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 19:49:10.195913   48438 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 19:49:10.195982   48438 addons.go:70] Setting storage-provisioner=true in profile "functional-384006"
	I1212 19:49:10.195999   48438 addons.go:239] Setting addon storage-provisioner=true in "functional-384006"
	I1212 19:49:10.196020   48438 host.go:66] Checking if "functional-384006" exists ...
	I1212 19:49:10.196498   48438 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
	I1212 19:49:10.197078   48438 addons.go:70] Setting default-storageclass=true in profile "functional-384006"
	I1212 19:49:10.197104   48438 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-384006"
	I1212 19:49:10.197385   48438 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
	I1212 19:49:10.200737   48438 out.go:179] * Verifying Kubernetes components...
	I1212 19:49:10.203657   48438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:49:10.242694   48438 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:49:10.242850   48438 kapi.go:59] client config for functional-384006: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.key", CAFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 19:49:10.243167   48438 addons.go:239] Setting addon default-storageclass=true in "functional-384006"
	I1212 19:49:10.243197   48438 host.go:66] Checking if "functional-384006" exists ...
	I1212 19:49:10.243613   48438 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
	I1212 19:49:10.244264   48438 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 19:49:10.248400   48438 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:10.248422   48438 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 19:49:10.248484   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:10.280006   48438 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:10.280027   48438 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 19:49:10.280091   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:10.292135   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:10.320079   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
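
Both docker-inspect calls above recover the host port that Docker mapped to the container's 22/tcp, so the SSH client can dial 127.0.0.1:32788. A sketch of the same lookup (sshPort is a made-up helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshPort asks Docker which host port is bound to the container's
// 22/tcp, using the same Go template as the logged inspect call.
func sshPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	port, err := sshPort("functional-384006")
	fmt.Println(port, err) // 32788 in this run
}
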
	I1212 19:49:10.410663   48438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:49:10.453525   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:10.485844   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:11.196335   48438 node_ready.go:35] waiting up to 6m0s for node "functional-384006" to be "Ready" ...
	I1212 19:49:11.196458   48438 type.go:168] "Request Body" body=""
	I1212 19:49:11.196510   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:11.196726   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:11.196748   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.196769   48438 retry.go:31] will retry after 366.342967ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.196806   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:11.196817   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.196823   48438 retry.go:31] will retry after 300.335318ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.196876   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:11.497399   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:11.554914   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:11.558623   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.558688   48438 retry.go:31] will retry after 444.117502ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.563799   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:11.619827   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:11.623191   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.623218   48438 retry.go:31] will retry after 549.294372ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
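
Every failed apply above is retried with a growing, jittered delay (the "will retry after ..." lines). A minimal sketch of that retry shape; the backoff policy shown here is an assumption for illustration, not minikube's exact retry.go implementation:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry runs f up to attempts times, sleeping with exponential backoff
// plus jitter between failures and logging the delay, as retry.go does.
func retry(attempts int, base time.Duration, f func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		d := base << i                            // exponential growth
		d += time.Duration(rand.Int63n(int64(d))) // jitter
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	_ = retry(5, 300*time.Millisecond, func() error {
		return fmt.Errorf("connect: connection refused")
	})
}
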
	I1212 19:49:11.698171   48438 type.go:168] "Request Body" body=""
	I1212 19:49:11.698248   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:11.698564   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:12.003014   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:12.062616   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:12.066362   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.066391   48438 retry.go:31] will retry after 595.188251ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.173715   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:12.197048   48438 type.go:168] "Request Body" body=""
	I1212 19:49:12.197131   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:12.197395   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:12.233993   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:12.234039   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.234058   48438 retry.go:31] will retry after 392.030002ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.626804   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:12.662348   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:12.696816   48438 type.go:168] "Request Body" body=""
	I1212 19:49:12.696944   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:12.697262   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:12.708549   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:12.715333   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.715413   48438 retry.go:31] will retry after 1.207907286s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.756481   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:12.756580   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.756630   48438 retry.go:31] will retry after 988.700176ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:13.197091   48438 type.go:168] "Request Body" body=""
	I1212 19:49:13.197179   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:13.197507   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:13.197567   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
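
The GET /api/v1/nodes loop above polls roughly every 500ms until the node reports Ready or the 6m budget expires, tolerating connection-refused errors while the apiserver comes back. A client-go sketch of the same wait (the node name and kubeconfig path are taken from the log; the loop structure is illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-384006", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // connection refused just means: poll again
	}
	fmt.Println("timed out waiting for Ready")
}
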
	I1212 19:49:13.697358   48438 type.go:168] "Request Body" body=""
	I1212 19:49:13.697464   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:13.697803   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:13.746091   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:13.800035   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:13.803463   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:13.803491   48438 retry.go:31] will retry after 829.308427ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:13.923746   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:13.982211   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:13.982249   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:13.982267   48438 retry.go:31] will retry after 769.179652ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:14.196516   48438 type.go:168] "Request Body" body=""
	I1212 19:49:14.196587   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:14.196865   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:14.633627   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:14.690489   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:14.693763   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:14.693798   48438 retry.go:31] will retry after 2.844765229s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:14.697018   48438 type.go:168] "Request Body" body=""
	I1212 19:49:14.697087   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:14.697405   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:14.752598   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:14.810008   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:14.810058   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:14.810075   48438 retry.go:31] will retry after 1.702576008s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:15.196507   48438 type.go:168] "Request Body" body=""
	I1212 19:49:15.196581   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:15.196896   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:15.696568   48438 type.go:168] "Request Body" body=""
	I1212 19:49:15.696635   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:15.696970   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:15.697028   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:16.196951   48438 type.go:168] "Request Body" body=""
	I1212 19:49:16.197024   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:16.197313   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:16.513895   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:16.577782   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:16.577823   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:16.577842   48438 retry.go:31] will retry after 3.833463827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:16.697243   48438 type.go:168] "Request Body" body=""
	I1212 19:49:16.697311   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:16.697616   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:17.197033   48438 type.go:168] "Request Body" body=""
	I1212 19:49:17.197116   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:17.197383   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:17.538823   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:17.596746   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:17.600222   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:17.600249   48438 retry.go:31] will retry after 2.11378985s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:17.696505   48438 type.go:168] "Request Body" body=""
	I1212 19:49:17.696573   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:17.696885   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:18.196556   48438 type.go:168] "Request Body" body=""
	I1212 19:49:18.196667   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:18.196977   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:18.197023   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:18.696638   48438 type.go:168] "Request Body" body=""
	I1212 19:49:18.696729   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:18.696984   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:19.196736   48438 type.go:168] "Request Body" body=""
	I1212 19:49:19.196812   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:19.197214   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:19.696622   48438 type.go:168] "Request Body" body=""
	I1212 19:49:19.696700   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:19.696961   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:19.714208   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:19.768038   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:19.771528   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:19.771557   48438 retry.go:31] will retry after 5.800996246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
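
(Editor's note on the error text itself: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, so while localhost:8441 refuses connections the apply fails during validation, before the manifest is ever submitted. The workaround the message suggests, kubectl apply --validate=false -f <manifest>, would only skip the schema download; the apply still needs a reachable apiserver, so with the dial refused outright, retrying as minikube does here is the sensible behavior.)
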
	I1212 19:49:20.197387   48438 type.go:168] "Request Body" body=""
	I1212 19:49:20.197458   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:20.197743   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:20.197788   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:20.412247   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:20.466933   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:20.470625   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:20.470653   48438 retry.go:31] will retry after 5.197371043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:20.697029   48438 type.go:168] "Request Body" body=""
	I1212 19:49:20.697099   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:20.697410   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:21.197198   48438 type.go:168] "Request Body" body=""
	I1212 19:49:21.197271   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:21.197569   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:21.697046   48438 type.go:168] "Request Body" body=""
	I1212 19:49:21.697116   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:21.697371   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:22.197188   48438 type.go:168] "Request Body" body=""
	I1212 19:49:22.197269   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:22.197585   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:22.697243   48438 type.go:168] "Request Body" body=""
	I1212 19:49:22.697314   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:22.697647   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:22.697696   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
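
(Editor's note: the paired "Request"/"Response" lines come from client-go's round-tripper debug logging, which wraps the HTTP transport and records verb, URL, headers, and latency. A minimal sketch of such a wrapper, illustrative rather than client-go's actual round_trippers.go, looks like this:)

    import (
        "log"
        "net/http"
        "time"
    )

    // loggingRoundTripper wraps another http.RoundTripper and logs each
    // request and response the way the round_trippers lines above do.
    type loggingRoundTripper struct {
        next http.RoundTripper
    }

    func (l loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
        start := time.Now()
        log.Printf("Request verb=%q url=%q", req.Method, req.URL)
        resp, err := l.next.RoundTrip(req)
        ms := time.Since(start).Milliseconds()
        if err != nil {
            // A refused dial surfaces here, before any status exists --
            // hence the empty status="" and milliseconds=0 in this log.
            log.Printf("Response status=%q milliseconds=%d err=%v", "", ms, err)
            return nil, err
        }
        log.Printf("Response status=%q milliseconds=%d", resp.Status, ms)
        return resp, nil
    }

(Installed via http.Client{Transport: loggingRoundTripper{next: http.DefaultTransport}}; an immediately refused connection never produces a status line, which is why every response above reads status="" with milliseconds=0.)
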
	I1212 19:49:23.197042   48438 type.go:168] "Request Body" body=""
	I1212 19:49:23.197134   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:23.197408   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:23.697049   48438 type.go:168] "Request Body" body=""
	I1212 19:49:23.697121   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:23.697429   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:24.197196   48438 type.go:168] "Request Body" body=""
	I1212 19:49:24.197268   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:24.197600   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:24.697001   48438 type.go:168] "Request Body" body=""
	I1212 19:49:24.697067   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:24.697318   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:25.196599   48438 type.go:168] "Request Body" body=""
	I1212 19:49:25.196674   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:25.197011   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:25.197067   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:25.573546   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:25.640105   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:25.640150   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:25.640168   48438 retry.go:31] will retry after 9.327300318s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:25.668309   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:25.696826   48438 type.go:168] "Request Body" body=""
	I1212 19:49:25.696923   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:25.697181   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:25.735314   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:25.738857   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:25.738887   48438 retry.go:31] will retry after 6.705148998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:26.197164   48438 type.go:168] "Request Body" body=""
	I1212 19:49:26.197240   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:26.197490   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:26.697309   48438 type.go:168] "Request Body" body=""
	I1212 19:49:26.697408   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:26.697729   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:27.197507   48438 type.go:168] "Request Body" body=""
	I1212 19:49:27.197584   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:27.197871   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:27.197919   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:27.696575   48438 type.go:168] "Request Body" body=""
	I1212 19:49:27.696652   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:27.696952   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:28.196680   48438 type.go:168] "Request Body" body=""
	I1212 19:49:28.196762   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:28.197103   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:28.696600   48438 type.go:168] "Request Body" body=""
	I1212 19:49:28.696675   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:28.696996   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:29.196525   48438 type.go:168] "Request Body" body=""
	I1212 19:49:29.196638   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:29.196926   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:29.696599   48438 type.go:168] "Request Body" body=""
	I1212 19:49:29.696677   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:29.697003   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:29.697067   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
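
(Editor's note: the poll ticks above land every 500ms, visible in the .19x/.69x second stamps. Stitched together with the isNodeReady helper sketched earlier, the overall wait loop is roughly the following; the interval, timeout, and function name are assumptions, not minikube's actual values.)

    import (
        "context"
        "log"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForNodeReady polls every 500ms until the node is Ready or the
    // timeout expires; transient dial errors are logged and retried.
    func waitForNodeReady(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                ready, err := isNodeReady(ctx, c, name)
                if err != nil {
                    log.Printf("error getting node %q (will retry): %v", name, err)
                    return false, nil // keep polling through connection refused
                }
                return ready, nil
            })
    }
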
	I1212 19:49:30.197085   48438 type.go:168] "Request Body" body=""
	I1212 19:49:30.197181   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:30.197519   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:30.697033   48438 type.go:168] "Request Body" body=""
	I1212 19:49:30.697106   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:30.697351   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:31.197223   48438 type.go:168] "Request Body" body=""
	I1212 19:49:31.197295   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:31.197605   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:31.697429   48438 type.go:168] "Request Body" body=""
	I1212 19:49:31.697504   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:31.697832   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:31.697883   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:32.196518   48438 type.go:168] "Request Body" body=""
	I1212 19:49:32.196586   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:32.196831   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:32.444273   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:32.498733   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:32.502453   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:32.502484   48438 retry.go:31] will retry after 9.024395099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
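
(Editor's note: the ssh_runner lines show each apply being executed inside the node, with KUBECONFIG set on the sudo invocation, and the non-zero exit status is what feeds the retry loop. Minikube really runs this over SSH; a local os/exec sketch of the same exit-status handling, with a made-up helper name, would be:)

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // applyManifest runs kubectl apply and surfaces the exit status the
    // way the "Process exited with status 1" lines above report it.
    // sudo treats the leading VAR=value argument as an env assignment,
    // matching the command shape logged by ssh_runner.
    func applyManifest(manifest string) error {
        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "kubectl", "apply", "--force", "-f", manifest)
        out, err := cmd.CombinedOutput()
        if err != nil {
            var exitErr *exec.ExitError
            if errors.As(err, &exitErr) {
                return fmt.Errorf("process exited with status %d: %s", exitErr.ExitCode(), out)
            }
            return err
        }
        return nil
    }
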
	I1212 19:49:32.696884   48438 type.go:168] "Request Body" body=""
	I1212 19:49:32.696967   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:32.697298   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:33.196612   48438 type.go:168] "Request Body" body=""
	I1212 19:49:33.196705   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:33.196986   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:33.696528   48438 type.go:168] "Request Body" body=""
	I1212 19:49:33.696606   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:33.696862   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:34.196549   48438 type.go:168] "Request Body" body=""
	I1212 19:49:34.196618   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:34.196944   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:34.196991   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:34.696558   48438 type.go:168] "Request Body" body=""
	I1212 19:49:34.696625   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:34.696943   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:34.968441   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:35.030670   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:35.034703   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:35.034735   48438 retry.go:31] will retry after 11.456350697s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:35.196975   48438 type.go:168] "Request Body" body=""
	I1212 19:49:35.197050   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:35.197325   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:35.697091   48438 type.go:168] "Request Body" body=""
	I1212 19:49:35.697164   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:35.697483   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:36.197206   48438 type.go:168] "Request Body" body=""
	I1212 19:49:36.197280   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:36.197576   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:36.197625   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:36.697028   48438 type.go:168] "Request Body" body=""
	I1212 19:49:36.697108   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:36.697363   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:37.197157   48438 type.go:168] "Request Body" body=""
	I1212 19:49:37.197231   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:37.197556   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:37.697344   48438 type.go:168] "Request Body" body=""
	I1212 19:49:37.697421   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:37.697737   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:38.197048   48438 type.go:168] "Request Body" body=""
	I1212 19:49:38.197120   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:38.197393   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:38.697237   48438 type.go:168] "Request Body" body=""
	I1212 19:49:38.697313   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:38.697687   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:38.697751   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:39.197495   48438 type.go:168] "Request Body" body=""
	I1212 19:49:39.197574   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:39.197923   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:39.696600   48438 type.go:168] "Request Body" body=""
	I1212 19:49:39.696663   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:39.696902   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:40.196826   48438 type.go:168] "Request Body" body=""
	I1212 19:49:40.196908   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:40.197247   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:40.696978   48438 type.go:168] "Request Body" body=""
	I1212 19:49:40.697049   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:40.697369   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:41.197258   48438 type.go:168] "Request Body" body=""
	I1212 19:49:41.197327   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:41.197601   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:41.197683   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:41.527120   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:41.586633   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:41.590403   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:41.590436   48438 retry.go:31] will retry after 11.748431511s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:41.696875   48438 type.go:168] "Request Body" body=""
	I1212 19:49:41.696951   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:41.697272   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:42.196642   48438 type.go:168] "Request Body" body=""
	I1212 19:49:42.196731   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:42.197083   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:42.696550   48438 type.go:168] "Request Body" body=""
	I1212 19:49:42.696647   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:42.696923   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:43.196548   48438 type.go:168] "Request Body" body=""
	I1212 19:49:43.196618   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:43.196955   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:43.696648   48438 type.go:168] "Request Body" body=""
	I1212 19:49:43.696721   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:43.697043   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:43.697102   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:44.196771   48438 type.go:168] "Request Body" body=""
	I1212 19:49:44.196840   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:44.197104   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:44.696558   48438 type.go:168] "Request Body" body=""
	I1212 19:49:44.696662   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:44.696979   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:45.196928   48438 type.go:168] "Request Body" body=""
	I1212 19:49:45.197005   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:45.197335   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:45.696564   48438 type.go:168] "Request Body" body=""
	I1212 19:49:45.696632   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:45.696941   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:46.196940   48438 type.go:168] "Request Body" body=""
	I1212 19:49:46.197010   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:46.197309   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:46.197362   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:46.491755   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:46.549211   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:46.549254   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:46.549272   48438 retry.go:31] will retry after 7.577859466s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:46.697552   48438 type.go:168] "Request Body" body=""
	I1212 19:49:46.697629   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:46.697924   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:47.196531   48438 type.go:168] "Request Body" body=""
	I1212 19:49:47.196597   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:47.196927   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:47.696631   48438 type.go:168] "Request Body" body=""
	I1212 19:49:47.696710   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:47.696981   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:48.196610   48438 type.go:168] "Request Body" body=""
	I1212 19:49:48.196684   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:48.197015   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:48.696655   48438 type.go:168] "Request Body" body=""
	I1212 19:49:48.696726   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:48.697050   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:48.697099   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:49.196624   48438 type.go:168] "Request Body" body=""
	I1212 19:49:49.196709   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:49.197019   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:49.696618   48438 type.go:168] "Request Body" body=""
	I1212 19:49:49.696695   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:49.697125   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:50.197269   48438 type.go:168] "Request Body" body=""
	I1212 19:49:50.197350   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:50.197608   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:50.697495   48438 type.go:168] "Request Body" body=""
	I1212 19:49:50.697567   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:50.697901   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:50.697955   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:51.196732   48438 type.go:168] "Request Body" body=""
	I1212 19:49:51.196803   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:51.197112   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:51.696762   48438 type.go:168] "Request Body" body=""
	I1212 19:49:51.696829   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:51.697174   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:52.196599   48438 type.go:168] "Request Body" body=""
	I1212 19:49:52.196673   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:52.196971   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:52.696606   48438 type.go:168] "Request Body" body=""
	I1212 19:49:52.696678   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:52.697012   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:53.196528   48438 type.go:168] "Request Body" body=""
	I1212 19:49:53.196606   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:53.196891   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:53.196934   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:53.339331   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:53.394698   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:53.398291   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:53.398322   48438 retry.go:31] will retry after 25.381584091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:53.696596   48438 type.go:168] "Request Body" body=""
	I1212 19:49:53.696686   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:53.696994   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:54.127648   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:54.185700   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:54.185751   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:54.185771   48438 retry.go:31] will retry after 18.076319981s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:54.196871   48438 type.go:168] "Request Body" body=""
	I1212 19:49:54.196963   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:54.197226   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:54.696517   48438 type.go:168] "Request Body" body=""
	I1212 19:49:54.696579   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:54.696863   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:55.196622   48438 type.go:168] "Request Body" body=""
	I1212 19:49:55.196694   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:55.196982   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:55.197044   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:55.696592   48438 type.go:168] "Request Body" body=""
	I1212 19:49:55.696691   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:55.696999   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:56.196994   48438 type.go:168] "Request Body" body=""
	I1212 19:49:56.197059   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:56.197324   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:56.697159   48438 type.go:168] "Request Body" body=""
	I1212 19:49:56.697233   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:56.697537   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:57.197290   48438 type.go:168] "Request Body" body=""
	I1212 19:49:57.197368   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:57.197681   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:57.197733   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:57.697000   48438 type.go:168] "Request Body" body=""
	I1212 19:49:57.697069   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:57.697304   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:58.196582   48438 type.go:168] "Request Body" body=""
	I1212 19:49:58.196651   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:58.196993   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:58.696564   48438 type.go:168] "Request Body" body=""
	I1212 19:49:58.696640   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:58.696958   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:59.196629   48438 type.go:168] "Request Body" body=""
	I1212 19:49:59.196697   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:59.197071   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:59.696921   48438 type.go:168] "Request Body" body=""
	I1212 19:49:59.696993   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:59.697326   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:59.697380   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:00.197384   48438 type.go:168] "Request Body" body=""
	I1212 19:50:00.197468   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:00.197775   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:00.696649   48438 type.go:168] "Request Body" body=""
	I1212 19:50:00.696725   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:00.696989   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:01.197059   48438 type.go:168] "Request Body" body=""
	I1212 19:50:01.197145   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:01.197509   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:01.697372   48438 type.go:168] "Request Body" body=""
	I1212 19:50:01.697463   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:01.697881   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:01.697942   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:02.196547   48438 type.go:168] "Request Body" body=""
	I1212 19:50:02.196622   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:02.196936   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:02.696592   48438 type.go:168] "Request Body" body=""
	I1212 19:50:02.696670   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:02.696998   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:03.196708   48438 type.go:168] "Request Body" body=""
	I1212 19:50:03.196781   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:03.197108   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:03.696791   48438 type.go:168] "Request Body" body=""
	I1212 19:50:03.696860   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:03.697174   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:04.196836   48438 type.go:168] "Request Body" body=""
	I1212 19:50:04.196908   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:04.197244   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:04.197301   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:04.696812   48438 type.go:168] "Request Body" body=""
	I1212 19:50:04.696891   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:04.697179   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:05.196827   48438 type.go:168] "Request Body" body=""
	I1212 19:50:05.196904   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:05.197227   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:05.696566   48438 type.go:168] "Request Body" body=""
	I1212 19:50:05.696635   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:05.696920   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:06.196948   48438 type.go:168] "Request Body" body=""
	I1212 19:50:06.197026   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:06.197368   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:06.197422   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:06.697024   48438 type.go:168] "Request Body" body=""
	I1212 19:50:06.697097   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:06.697393   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:07.197202   48438 type.go:168] "Request Body" body=""
	I1212 19:50:07.197278   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:07.197614   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:07.697404   48438 type.go:168] "Request Body" body=""
	I1212 19:50:07.697475   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:07.697790   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:08.196467   48438 type.go:168] "Request Body" body=""
	I1212 19:50:08.196533   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:08.196831   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:08.696513   48438 type.go:168] "Request Body" body=""
	I1212 19:50:08.696584   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:08.696925   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:08.696997   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:09.196531   48438 type.go:168] "Request Body" body=""
	I1212 19:50:09.196606   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:09.196936   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:09.696629   48438 type.go:168] "Request Body" body=""
	I1212 19:50:09.696697   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:09.696947   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:10.197069   48438 type.go:168] "Request Body" body=""
	I1212 19:50:10.197157   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:10.197524   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:10.697347   48438 type.go:168] "Request Body" body=""
	I1212 19:50:10.697420   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:10.697769   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:10.697839   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:11.197146   48438 type.go:168] "Request Body" body=""
	I1212 19:50:11.197258   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:11.197571   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:11.697392   48438 type.go:168] "Request Body" body=""
	I1212 19:50:11.697467   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:11.697811   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:12.197401   48438 type.go:168] "Request Body" body=""
	I1212 19:50:12.197473   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:12.197766   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:12.263038   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:50:12.317640   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:12.321089   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:50:12.321118   48438 retry.go:31] will retry after 33.331276854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
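The failed apply is handed to minikube's retry helper (retry.go:31), which reruns the identical command after a randomized delay ("will retry after 33.3s"). A rough equivalent, assuming a plain os/exec invocation and a jittered, roughly doubling backoff (applyWithRetry and the 15s starting delay are hypothetical, for illustration only):

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // applyWithRetry reruns `sudo KUBECONFIG=... kubectl apply --force -f manifest`
    // until it succeeds or the attempts are exhausted, sleeping a jittered,
    // growing delay between tries so concurrent retries do not synchronize.
    func applyWithRetry(kubectl, kubeconfig, manifest string, attempts int) error {
        delay := 15 * time.Second
        var lastErr error
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("sudo", "KUBECONFIG="+kubeconfig, kubectl,
                "apply", "--force", "-f", manifest).CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("%v\n%s", err, out)
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", sleep, lastErr)
            time.Sleep(sleep)
            delay *= 2
        }
        return lastErr
    }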
	I1212 19:50:12.696541   48438 type.go:168] "Request Body" body=""
	I1212 19:50:12.696627   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:12.696894   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:13.196651   48438 type.go:168] "Request Body" body=""
	I1212 19:50:13.196725   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:13.197000   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:13.197046   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:13.696602   48438 type.go:168] "Request Body" body=""
	I1212 19:50:13.696674   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:13.696975   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:14.196564   48438 type.go:168] "Request Body" body=""
	I1212 19:50:14.196634   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:14.196947   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:14.696632   48438 type.go:168] "Request Body" body=""
	I1212 19:50:14.696719   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:14.697044   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:15.196623   48438 type.go:168] "Request Body" body=""
	I1212 19:50:15.196715   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:15.197032   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:15.197085   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:15.696713   48438 type.go:168] "Request Body" body=""
	I1212 19:50:15.696791   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:15.697104   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:16.197135   48438 type.go:168] "Request Body" body=""
	I1212 19:50:16.197236   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:16.197570   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:16.697411   48438 type.go:168] "Request Body" body=""
	I1212 19:50:16.697489   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:16.697833   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:17.196534   48438 type.go:168] "Request Body" body=""
	I1212 19:50:17.196602   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:17.196867   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:17.696613   48438 type.go:168] "Request Body" body=""
	I1212 19:50:17.696709   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:17.697053   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:17.697120   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:18.196648   48438 type.go:168] "Request Body" body=""
	I1212 19:50:18.196724   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:18.197072   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:18.696613   48438 type.go:168] "Request Body" body=""
	I1212 19:50:18.696679   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:18.696950   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:18.780412   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:50:18.840261   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:18.840307   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:50:18.840327   48438 retry.go:31] will retry after 31.549397312s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
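The validation error itself is a client-side step: before applying, kubectl downloads the OpenAPI schema from the apiserver, and that download is what dies with "dial tcp [::1]:8441: connect: connection refused". A small diagnostic sketch of the same probe (ad-hoc, not part of the test suite; InsecureSkipVerify is used only because the apiserver certificate is self-signed inside the node):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // probeOpenAPI fetches the schema endpoint that kubectl validation
    // relies on; a refused connection here explains the apply failures
    // recorded in this log.
    func probeOpenAPI(base string) error {
        client := &http.Client{
            Timeout: 32 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(base + "/openapi/v2?timeout=32s")
        if err != nil {
            return fmt.Errorf("openapi probe failed: %w", err)
        }
        defer resp.Body.Close()
        fmt.Println("openapi status:", resp.Status)
        return nil
    }

Note that the --validate=false hint in the error only skips this schema download; the apply itself would still need a reachable apiserver.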
	I1212 19:50:19.196623   48438 type.go:168] "Request Body" body=""
	I1212 19:50:19.196694   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:19.196999   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:19.696626   48438 type.go:168] "Request Body" body=""
	I1212 19:50:19.696703   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:19.697021   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:20.197071   48438 type.go:168] "Request Body" body=""
	I1212 19:50:20.197171   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:20.197499   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:20.197554   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:20.697293   48438 type.go:168] "Request Body" body=""
	I1212 19:50:20.697395   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:20.697711   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:21.197239   48438 type.go:168] "Request Body" body=""
	I1212 19:50:21.197313   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:21.197699   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:21.697033   48438 type.go:168] "Request Body" body=""
	I1212 19:50:21.697105   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:21.697463   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:22.197573   48438 type.go:168] "Request Body" body=""
	I1212 19:50:22.197648   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:22.197961   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:22.198017   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:22.696673   48438 type.go:168] "Request Body" body=""
	I1212 19:50:22.696757   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:22.697109   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:23.196692   48438 type.go:168] "Request Body" body=""
	I1212 19:50:23.196763   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:23.197088   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:23.696607   48438 type.go:168] "Request Body" body=""
	I1212 19:50:23.696679   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:23.697041   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:24.196735   48438 type.go:168] "Request Body" body=""
	I1212 19:50:24.196826   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:24.197141   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:24.696553   48438 type.go:168] "Request Body" body=""
	I1212 19:50:24.696621   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:24.696913   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:24.696962   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:25.196593   48438 type.go:168] "Request Body" body=""
	I1212 19:50:25.196673   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:25.197028   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:25.696594   48438 type.go:168] "Request Body" body=""
	I1212 19:50:25.696673   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:25.696999   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:26.196805   48438 type.go:168] "Request Body" body=""
	I1212 19:50:26.196888   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:26.197147   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:26.696608   48438 type.go:168] "Request Body" body=""
	I1212 19:50:26.696679   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:26.697019   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:26.697078   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:27.196623   48438 type.go:168] "Request Body" body=""
	I1212 19:50:27.196705   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:27.197036   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:27.696717   48438 type.go:168] "Request Body" body=""
	I1212 19:50:27.696786   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:27.697091   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:28.196811   48438 type.go:168] "Request Body" body=""
	I1212 19:50:28.196880   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:28.197204   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:28.696604   48438 type.go:168] "Request Body" body=""
	I1212 19:50:28.696681   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:28.697032   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:28.697101   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:29.196569   48438 type.go:168] "Request Body" body=""
	I1212 19:50:29.196634   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:29.196899   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:29.696596   48438 type.go:168] "Request Body" body=""
	I1212 19:50:29.696673   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:29.697016   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:30.196809   48438 type.go:168] "Request Body" body=""
	I1212 19:50:30.196906   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:30.197224   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:30.696564   48438 type.go:168] "Request Body" body=""
	I1212 19:50:30.696665   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:30.696997   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:31.196990   48438 type.go:168] "Request Body" body=""
	I1212 19:50:31.197061   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:31.197407   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:31.197465   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:31.697274   48438 type.go:168] "Request Body" body=""
	I1212 19:50:31.697350   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:31.697677   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:32.197039   48438 type.go:168] "Request Body" body=""
	I1212 19:50:32.197133   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:32.197397   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:32.697173   48438 type.go:168] "Request Body" body=""
	I1212 19:50:32.697264   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:32.697607   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:33.197434   48438 type.go:168] "Request Body" body=""
	I1212 19:50:33.197509   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:33.197848   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:33.197901   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:33.696526   48438 type.go:168] "Request Body" body=""
	I1212 19:50:33.696597   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:33.696851   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:34.196561   48438 type.go:168] "Request Body" body=""
	I1212 19:50:34.196634   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:34.196929   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:34.696533   48438 type.go:168] "Request Body" body=""
	I1212 19:50:34.696627   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:34.696942   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:35.196543   48438 type.go:168] "Request Body" body=""
	I1212 19:50:35.196615   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:35.196925   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:35.696579   48438 type.go:168] "Request Body" body=""
	I1212 19:50:35.696679   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:35.696996   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:35.697050   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:36.197040   48438 type.go:168] "Request Body" body=""
	I1212 19:50:36.197129   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:36.197456   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:36.697255   48438 type.go:168] "Request Body" body=""
	I1212 19:50:36.697338   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:36.697651   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:37.197319   48438 type.go:168] "Request Body" body=""
	I1212 19:50:37.197399   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:37.197705   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:37.697534   48438 type.go:168] "Request Body" body=""
	I1212 19:50:37.697606   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:37.697891   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:37.697935   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:38.196632   48438 type.go:168] "Request Body" body=""
	I1212 19:50:38.196697   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:38.197041   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:38.696592   48438 type.go:168] "Request Body" body=""
	I1212 19:50:38.696683   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:38.696994   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:39.196632   48438 type.go:168] "Request Body" body=""
	I1212 19:50:39.196728   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:39.197038   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:39.696547   48438 type.go:168] "Request Body" body=""
	I1212 19:50:39.696633   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:39.696879   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:40.197486   48438 type.go:168] "Request Body" body=""
	I1212 19:50:40.197559   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:40.197900   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:40.197971   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:40.696503   48438 type.go:168] "Request Body" body=""
	I1212 19:50:40.696594   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:40.696917   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:41.196679   48438 type.go:168] "Request Body" body=""
	I1212 19:50:41.196745   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:41.196986   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:41.696662   48438 type.go:168] "Request Body" body=""
	I1212 19:50:41.696734   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:41.697088   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:42.196929   48438 type.go:168] "Request Body" body=""
	I1212 19:50:42.197017   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:42.197388   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:42.697020   48438 type.go:168] "Request Body" body=""
	I1212 19:50:42.697095   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:42.697350   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:42.697390   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:43.197172   48438 type.go:168] "Request Body" body=""
	I1212 19:50:43.197249   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:43.197578   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:43.697431   48438 type.go:168] "Request Body" body=""
	I1212 19:50:43.697521   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:43.697836   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:44.196518   48438 type.go:168] "Request Body" body=""
	I1212 19:50:44.196586   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:44.196857   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:44.696570   48438 type.go:168] "Request Body" body=""
	I1212 19:50:44.696646   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:44.697013   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:45.196875   48438 type.go:168] "Request Body" body=""
	I1212 19:50:45.196959   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:45.197384   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:45.197450   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:45.653170   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:50:45.696473   48438 type.go:168] "Request Body" body=""
	I1212 19:50:45.696544   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:45.696768   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:45.722043   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:45.722078   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:45.722170   48438 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 19:50:46.197149   48438 type.go:168] "Request Body" body=""
	I1212 19:50:46.197221   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:46.197524   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:46.697218   48438 type.go:168] "Request Body" body=""
	I1212 19:50:46.697285   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:46.697603   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:47.197024   48438 type.go:168] "Request Body" body=""
	I1212 19:50:47.197110   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:47.197403   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:47.697075   48438 type.go:168] "Request Body" body=""
	I1212 19:50:47.697158   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:47.697475   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:47.697529   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:48.197120   48438 type.go:168] "Request Body" body=""
	I1212 19:50:48.197195   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:48.197571   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:48.697105   48438 type.go:168] "Request Body" body=""
	I1212 19:50:48.697174   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:48.697455   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:49.197123   48438 type.go:168] "Request Body" body=""
	I1212 19:50:49.197191   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:49.197523   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:49.697198   48438 type.go:168] "Request Body" body=""
	I1212 19:50:49.697276   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:49.697615   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:49.697669   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:50.197372   48438 type.go:168] "Request Body" body=""
	I1212 19:50:50.197443   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:50.197708   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:50.390183   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:50:50.447451   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:50.447486   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:50.447560   48438 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
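
The storage-provisioner manifest goes through the same pattern: run kubectl apply --force over SSH, and on a non-zero exit log "apply failed, will retry". Because the apiserver never comes up, the retries exhaust and the addon is surfaced as a callback error instead. A stripped-down sketch of that retry-around-exec pattern (the retry budget and backoff below are assumptions, not minikube's actual policy):

package addons

import (
	"fmt"
	"os/exec"
	"time"
)

// applyManifest shells out to kubectl the way the logged command does and
// retries on failure. The paths mirror the log; the loop bounds are
// illustrative.
func applyManifest(kubeconfig, kubectl, manifest string) error {
	var lastErr error
	for attempt := 1; attempt <= 5; attempt++ {
		cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig, kubectl,
			"apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply %s failed: %v\noutput:\n%s", manifest, err, out)
		time.Sleep(time.Duration(attempt) * time.Second) // linear backoff, illustrative
	}
	return lastErr
}

Note that the --validate=false hint in the error text would only skip the schema download; the subsequent POST to the apiserver would still be refused here, so disabling validation would not have rescued the addon.
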
	I1212 19:50:50.450729   48438 out.go:179] * Enabled addons: 
	I1212 19:50:50.452858   48438 addons.go:530] duration metric: took 1m40.25694205s for enable addons: enabled=[]
	I1212 19:50:50.697432   48438 type.go:168] "Request Body" body=""
	I1212 19:50:50.697527   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:50.697885   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:51.196739   48438 type.go:168] "Request Body" body=""
	I1212 19:50:51.196816   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:51.197159   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:51.696528   48438 type.go:168] "Request Body" body=""
	I1212 19:50:51.696603   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:51.696897   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:52.196579   48438 type.go:168] "Request Body" body=""
	I1212 19:50:52.196648   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:52.196951   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:52.197004   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:52.696606   48438 type.go:168] "Request Body" body=""
	I1212 19:50:52.696677   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:52.697003   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:53.196675   48438 type.go:168] "Request Body" body=""
	I1212 19:50:53.196744   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:53.196992   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:53.696666   48438 type.go:168] "Request Body" body=""
	I1212 19:50:53.696741   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:53.697070   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:54.196757   48438 type.go:168] "Request Body" body=""
	I1212 19:50:54.196826   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:54.197113   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:54.197157   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:54.696549   48438 type.go:168] "Request Body" body=""
	I1212 19:50:54.696641   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:54.696957   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:55.196628   48438 type.go:168] "Request Body" body=""
	I1212 19:50:55.196708   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:55.197136   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:55.696829   48438 type.go:168] "Request Body" body=""
	I1212 19:50:55.696900   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:55.697229   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:56.197066   48438 type.go:168] "Request Body" body=""
	I1212 19:50:56.197131   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:56.197387   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:56.197429   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:56.697219   48438 type.go:168] "Request Body" body=""
	I1212 19:50:56.697315   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:56.697648   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:57.197432   48438 type.go:168] "Request Body" body=""
	I1212 19:50:57.197513   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:57.197815   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:57.696494   48438 type.go:168] "Request Body" body=""
	I1212 19:50:57.696561   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:57.696813   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:58.196619   48438 type.go:168] "Request Body" body=""
	I1212 19:50:58.196701   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:58.197024   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:58.696727   48438 type.go:168] "Request Body" body=""
	I1212 19:50:58.696805   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:58.697094   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:58.697138   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:59.196546   48438 type.go:168] "Request Body" body=""
	I1212 19:50:59.196633   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:59.196941   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:59.696655   48438 type.go:168] "Request Body" body=""
	I1212 19:50:59.696728   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:59.697035   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:00.197073   48438 type.go:168] "Request Body" body=""
	I1212 19:51:00.197153   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:00.197534   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:00.697068   48438 type.go:168] "Request Body" body=""
	I1212 19:51:00.697139   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:00.697403   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:00.697447   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:01.197252   48438 type.go:168] "Request Body" body=""
	I1212 19:51:01.197345   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:01.197675   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:01.697471   48438 type.go:168] "Request Body" body=""
	I1212 19:51:01.697549   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:01.697859   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:02.196610   48438 type.go:168] "Request Body" body=""
	I1212 19:51:02.196684   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:02.196940   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:02.696593   48438 type.go:168] "Request Body" body=""
	I1212 19:51:02.696665   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:02.696980   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:03.196694   48438 type.go:168] "Request Body" body=""
	I1212 19:51:03.196766   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:03.197077   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:03.197130   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:03.696767   48438 type.go:168] "Request Body" body=""
	I1212 19:51:03.696834   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:03.697143   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:04.196630   48438 type.go:168] "Request Body" body=""
	I1212 19:51:04.196704   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:04.197007   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:04.696694   48438 type.go:168] "Request Body" body=""
	I1212 19:51:04.696764   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:04.697055   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:05.196712   48438 type.go:168] "Request Body" body=""
	I1212 19:51:05.196795   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:05.197072   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:05.696568   48438 type.go:168] "Request Body" body=""
	I1212 19:51:05.696638   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:05.696994   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:05.697052   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:06.197016   48438 type.go:168] "Request Body" body=""
	I1212 19:51:06.197103   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:06.197772   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:06.696478   48438 type.go:168] "Request Body" body=""
	I1212 19:51:06.696543   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:06.696795   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:07.197509   48438 type.go:168] "Request Body" body=""
	I1212 19:51:07.197581   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:07.197882   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:07.696523   48438 type.go:168] "Request Body" body=""
	I1212 19:51:07.696601   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:07.696891   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:08.197181   48438 type.go:168] "Request Body" body=""
	I1212 19:51:08.197247   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:08.197518   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:08.197562   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:08.697328   48438 type.go:168] "Request Body" body=""
	I1212 19:51:08.697400   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:08.697733   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:09.196841   48438 type.go:168] "Request Body" body=""
	I1212 19:51:09.196931   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:09.197340   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:09.696564   48438 type.go:168] "Request Body" body=""
	I1212 19:51:09.696684   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:09.697005   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:10.197489   48438 type.go:168] "Request Body" body=""
	I1212 19:51:10.197571   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:10.197956   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:10.198032   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:10.696692   48438 type.go:168] "Request Body" body=""
	I1212 19:51:10.696765   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:10.697075   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:11.196999   48438 type.go:168] "Request Body" body=""
	I1212 19:51:11.197068   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:11.197318   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:11.697124   48438 type.go:168] "Request Body" body=""
	I1212 19:51:11.697195   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:11.697510   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:12.197311   48438 type.go:168] "Request Body" body=""
	I1212 19:51:12.197383   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:12.197738   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:12.697029   48438 type.go:168] "Request Body" body=""
	I1212 19:51:12.697100   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:12.697351   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:12.697398   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:13.197118   48438 type.go:168] "Request Body" body=""
	I1212 19:51:13.197189   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:13.197491   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:13.697316   48438 type.go:168] "Request Body" body=""
	I1212 19:51:13.697395   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:13.697760   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:14.197024   48438 type.go:168] "Request Body" body=""
	I1212 19:51:14.197091   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:14.197349   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:14.697131   48438 type.go:168] "Request Body" body=""
	I1212 19:51:14.697203   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:14.697525   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:14.697582   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:15.197372   48438 type.go:168] "Request Body" body=""
	I1212 19:51:15.197446   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:15.197768   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:15.697037   48438 type.go:168] "Request Body" body=""
	I1212 19:51:15.697105   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:15.697362   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:16.197215   48438 type.go:168] "Request Body" body=""
	I1212 19:51:16.197294   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:16.197634   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:16.697439   48438 type.go:168] "Request Body" body=""
	I1212 19:51:16.697512   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:16.697826   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:16.697889   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:17.196515   48438 type.go:168] "Request Body" body=""
	I1212 19:51:17.196583   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:17.196839   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:17.696542   48438 type.go:168] "Request Body" body=""
	I1212 19:51:17.696615   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:17.696920   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:18.196616   48438 type.go:168] "Request Body" body=""
	I1212 19:51:18.196690   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:18.197045   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:18.696593   48438 type.go:168] "Request Body" body=""
	I1212 19:51:18.696662   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:18.696955   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:19.196572   48438 type.go:168] "Request Body" body=""
	I1212 19:51:19.196648   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:19.196953   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:19.197013   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:19.696609   48438 type.go:168] "Request Body" body=""
	I1212 19:51:19.696681   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:19.697021   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:20.196767   48438 type.go:168] "Request Body" body=""
	I1212 19:51:20.196839   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:20.197112   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:20.696858   48438 type.go:168] "Request Body" body=""
	I1212 19:51:20.696961   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:20.697324   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:21.197132   48438 type.go:168] "Request Body" body=""
	I1212 19:51:21.197203   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:21.197518   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:21.197569   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:21.697039   48438 type.go:168] "Request Body" body=""
	I1212 19:51:21.697115   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:21.697448   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:22.197274   48438 type.go:168] "Request Body" body=""
	I1212 19:51:22.197346   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:22.197691   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:22.697485   48438 type.go:168] "Request Body" body=""
	I1212 19:51:22.697564   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:22.697887   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:23.196561   48438 type.go:168] "Request Body" body=""
	I1212 19:51:23.196694   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:23.196959   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:23.696633   48438 type.go:168] "Request Body" body=""
	I1212 19:51:23.696703   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:23.696995   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:23.697041   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:24.196710   48438 type.go:168] "Request Body" body=""
	I1212 19:51:24.196779   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:24.197091   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:24.696706   48438 type.go:168] "Request Body" body=""
	I1212 19:51:24.696797   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:24.697093   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:25.196644   48438 type.go:168] "Request Body" body=""
	I1212 19:51:25.196722   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:25.197069   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:25.696784   48438 type.go:168] "Request Body" body=""
	I1212 19:51:25.696867   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:25.697150   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:25.697198   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:26.197035   48438 type.go:168] "Request Body" body=""
	I1212 19:51:26.197106   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:26.197360   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:26.697146   48438 type.go:168] "Request Body" body=""
	I1212 19:51:26.697218   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:26.697508   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:27.197323   48438 type.go:168] "Request Body" body=""
	I1212 19:51:27.197404   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:27.197694   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:27.697053   48438 type.go:168] "Request Body" body=""
	I1212 19:51:27.697134   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:27.697387   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:27.697428   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:28.197208   48438 type.go:168] "Request Body" body=""
	I1212 19:51:28.197282   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:28.197600   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:28.697377   48438 type.go:168] "Request Body" body=""
	I1212 19:51:28.697453   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:28.697770   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:29.197025   48438 type.go:168] "Request Body" body=""
	I1212 19:51:29.197094   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:29.197350   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:29.697086   48438 type.go:168] "Request Body" body=""
	I1212 19:51:29.697156   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:29.697471   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:29.697528   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:30.197322   48438 type.go:168] "Request Body" body=""
	I1212 19:51:30.197400   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:30.197752   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:30.697118   48438 type.go:168] "Request Body" body=""
	I1212 19:51:30.697210   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:30.697533   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:31.197427   48438 type.go:168] "Request Body" body=""
	I1212 19:51:31.197518   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:31.197859   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:31.697428   48438 type.go:168] "Request Body" body=""
	I1212 19:51:31.697506   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:31.697848   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:31.697924   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:32.196572   48438 type.go:168] "Request Body" body=""
	I1212 19:51:32.196639   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:32.196896   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:32.696579   48438 type.go:168] "Request Body" body=""
	I1212 19:51:32.696650   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:32.696942   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:33.196600   48438 type.go:168] "Request Body" body=""
	I1212 19:51:33.196675   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:33.197000   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:33.696689   48438 type.go:168] "Request Body" body=""
	I1212 19:51:33.696760   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:33.697011   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:34.196690   48438 type.go:168] "Request Body" body=""
	I1212 19:51:34.196767   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:34.197161   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:34.197214   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:34.696863   48438 type.go:168] "Request Body" body=""
	I1212 19:51:34.696936   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:34.697252   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:35.196546   48438 type.go:168] "Request Body" body=""
	I1212 19:51:35.196618   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:35.196925   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:35.696574   48438 type.go:168] "Request Body" body=""
	I1212 19:51:35.696652   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:35.696981   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:36.196823   48438 type.go:168] "Request Body" body=""
	I1212 19:51:36.196902   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:36.197231   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:36.197287   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:36.696532   48438 type.go:168] "Request Body" body=""
	I1212 19:51:36.696609   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:36.696939   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:37.196579   48438 type.go:168] "Request Body" body=""
	I1212 19:51:37.196647   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:37.196985   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:37.696709   48438 type.go:168] "Request Body" body=""
	I1212 19:51:37.696787   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:37.697120   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:38.196638   48438 type.go:168] "Request Body" body=""
	I1212 19:51:38.196709   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:38.196961   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:38.696633   48438 type.go:168] "Request Body" body=""
	I1212 19:51:38.696706   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:38.697082   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:38.697136   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:39.196826   48438 type.go:168] "Request Body" body=""
	I1212 19:51:39.196897   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:39.197247   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:39.696926   48438 type.go:168] "Request Body" body=""
	I1212 19:51:39.696993   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:39.697255   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:40.197304   48438 type.go:168] "Request Body" body=""
	I1212 19:51:40.197383   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:40.197713   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:40.697536   48438 type.go:168] "Request Body" body=""
	I1212 19:51:40.697608   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:40.697930   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:40.697980   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:41.196802   48438 type.go:168] "Request Body" body=""
	I1212 19:51:41.196879   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:41.197213   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:41.696612   48438 type.go:168] "Request Body" body=""
	I1212 19:51:41.696684   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:41.696972   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:42.196646   48438 type.go:168] "Request Body" body=""
	I1212 19:51:42.196740   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:42.197248   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:42.696573   48438 type.go:168] "Request Body" body=""
	I1212 19:51:42.696660   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:42.696989   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:43.196597   48438 type.go:168] "Request Body" body=""
	I1212 19:51:43.196673   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:43.197021   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:43.197077   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:43.696739   48438 type.go:168] "Request Body" body=""
	I1212 19:51:43.696817   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:43.697134   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:44.196553   48438 type.go:168] "Request Body" body=""
	I1212 19:51:44.196631   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:44.196885   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:44.696599   48438 type.go:168] "Request Body" body=""
	I1212 19:51:44.696676   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:44.697022   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:45.199987   48438 type.go:168] "Request Body" body=""
	I1212 19:51:45.200075   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:45.200389   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:45.200457   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:45.696961   48438 type.go:168] "Request Body" body=""
	I1212 19:51:45.697027   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:45.697297   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:46.197211   48438 type.go:168] "Request Body" body=""
	I1212 19:51:46.197284   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:46.197636   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:46.697445   48438 type.go:168] "Request Body" body=""
	I1212 19:51:46.697530   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:46.697884   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:47.196573   48438 type.go:168] "Request Body" body=""
	I1212 19:51:47.196640   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:47.196909   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:47.696587   48438 type.go:168] "Request Body" body=""
	I1212 19:51:47.696662   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:47.697003   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:47.697055   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:48.196488   48438 type.go:168] "Request Body" body=""
	I1212 19:51:48.196562   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:48.196880   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:48.696551   48438 type.go:168] "Request Body" body=""
	I1212 19:51:48.696621   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:48.696957   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:49.196623   48438 type.go:168] "Request Body" body=""
	I1212 19:51:49.196699   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:49.197013   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:49.696738   48438 type.go:168] "Request Body" body=""
	I1212 19:51:49.696820   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:49.697179   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:49.697232   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:50.197074   48438 type.go:168] "Request Body" body=""
	I1212 19:51:50.197154   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:50.197448   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:50.697257   48438 type.go:168] "Request Body" body=""
	I1212 19:51:50.697328   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:50.697663   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:51.197208   48438 type.go:168] "Request Body" body=""
	I1212 19:51:51.197282   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:51.197618   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:51.697235   48438 type.go:168] "Request Body" body=""
	I1212 19:51:51.697312   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:51.697612   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:51.697676   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:52.197406   48438 type.go:168] "Request Body" body=""
	I1212 19:51:52.197485   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:52.197812   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:52.696535   48438 type.go:168] "Request Body" body=""
	I1212 19:51:52.696633   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:52.696945   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:53.196550   48438 type.go:168] "Request Body" body=""
	I1212 19:51:53.196626   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:53.196901   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:53.696615   48438 type.go:168] "Request Body" body=""
	I1212 19:51:53.696688   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:53.697001   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:54.196603   48438 type.go:168] "Request Body" body=""
	I1212 19:51:54.196699   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:54.197048   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:54.197103   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:54.696754   48438 type.go:168] "Request Body" body=""
	I1212 19:51:54.696831   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:54.697099   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:55.196853   48438 type.go:168] "Request Body" body=""
	I1212 19:51:55.196927   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:55.197248   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:55.696613   48438 type.go:168] "Request Body" body=""
	I1212 19:51:55.696683   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:55.697052   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:56.196859   48438 type.go:168] "Request Body" body=""
	I1212 19:51:56.196930   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:56.197194   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:56.197240   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:56.696597   48438 type.go:168] "Request Body" body=""
	I1212 19:51:56.696681   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:56.697030   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:57.196593   48438 type.go:168] "Request Body" body=""
	I1212 19:51:57.196665   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:57.196998   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:57.696676   48438 type.go:168] "Request Body" body=""
	I1212 19:51:57.696744   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:57.697019   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:58.196568   48438 type.go:168] "Request Body" body=""
	I1212 19:51:58.196638   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:58.196955   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:58.696579   48438 type.go:168] "Request Body" body=""
	I1212 19:51:58.696651   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:58.696996   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:58.697049   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:59.196681   48438 type.go:168] "Request Body" body=""
	I1212 19:51:59.196753   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:59.197032   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:59.696579   48438 type.go:168] "Request Body" body=""
	I1212 19:51:59.696659   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:59.696968   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:00.197205   48438 type.go:168] "Request Body" body=""
	I1212 19:52:00.197290   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:00.197625   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:00.697067   48438 type.go:168] "Request Body" body=""
	I1212 19:52:00.697141   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:00.697476   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:00.697529   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:01.197420   48438 type.go:168] "Request Body" body=""
	I1212 19:52:01.197496   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:01.197846   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:01.696560   48438 type.go:168] "Request Body" body=""
	I1212 19:52:01.696637   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:01.696968   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:02.196588   48438 type.go:168] "Request Body" body=""
	I1212 19:52:02.196660   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:02.196972   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:02.696570   48438 type.go:168] "Request Body" body=""
	I1212 19:52:02.696648   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:02.696964   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:03.196609   48438 type.go:168] "Request Body" body=""
	I1212 19:52:03.196688   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:03.197049   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:03.197103   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:03.696754   48438 type.go:168] "Request Body" body=""
	I1212 19:52:03.696832   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:03.697081   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:04.196628   48438 type.go:168] "Request Body" body=""
	I1212 19:52:04.196706   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:04.197052   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:04.696745   48438 type.go:168] "Request Body" body=""
	I1212 19:52:04.696824   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:04.697154   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:05.196838   48438 type.go:168] "Request Body" body=""
	I1212 19:52:05.196927   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:05.197234   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:05.197290   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:05.696932   48438 type.go:168] "Request Body" body=""
	I1212 19:52:05.697009   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:05.697331   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:06.197237   48438 type.go:168] "Request Body" body=""
	I1212 19:52:06.197311   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:06.197634   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:06.697046   48438 type.go:168] "Request Body" body=""
	I1212 19:52:06.697120   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:06.697379   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:07.197151   48438 type.go:168] "Request Body" body=""
	I1212 19:52:07.197221   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:07.197514   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:07.197560   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:07.697323   48438 type.go:168] "Request Body" body=""
	I1212 19:52:07.697404   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:07.697708   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:08.197031   48438 type.go:168] "Request Body" body=""
	I1212 19:52:08.197097   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:08.197357   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:08.697148   48438 type.go:168] "Request Body" body=""
	I1212 19:52:08.697227   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:08.697556   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:09.197392   48438 type.go:168] "Request Body" body=""
	I1212 19:52:09.197468   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:09.197784   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:09.197845   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:09.696549   48438 type.go:168] "Request Body" body=""
	I1212 19:52:09.696616   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:09.696887   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:10.196962   48438 type.go:168] "Request Body" body=""
	I1212 19:52:10.197039   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:10.197334   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:10.696626   48438 type.go:168] "Request Body" body=""
	I1212 19:52:10.696717   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:10.697024   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:11.196847   48438 type.go:168] "Request Body" body=""
	I1212 19:52:11.196921   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:11.197227   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:11.696601   48438 type.go:168] "Request Body" body=""
	I1212 19:52:11.696679   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:11.696981   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:11.697032   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:12.196580   48438 type.go:168] "Request Body" body=""
	I1212 19:52:12.196650   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:12.196940   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:12.696545   48438 type.go:168] "Request Body" body=""
	I1212 19:52:12.696621   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:12.696869   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:13.196568   48438 type.go:168] "Request Body" body=""
	I1212 19:52:13.196664   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:13.196980   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:13.696589   48438 type.go:168] "Request Body" body=""
	I1212 19:52:13.696666   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:13.697006   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:13.697058   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:14.196560   48438 type.go:168] "Request Body" body=""
	I1212 19:52:14.196631   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:14.196946   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:14.696636   48438 type.go:168] "Request Body" body=""
	I1212 19:52:14.696714   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:14.697058   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:15.196659   48438 type.go:168] "Request Body" body=""
	I1212 19:52:15.196740   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:15.197071   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:15.696563   48438 type.go:168] "Request Body" body=""
	I1212 19:52:15.696653   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:15.696954   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:16.196956   48438 type.go:168] "Request Body" body=""
	I1212 19:52:16.197033   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:16.197379   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:16.197433   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:16.696942   48438 type.go:168] "Request Body" body=""
	I1212 19:52:16.697013   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:16.697325   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:17.197029   48438 type.go:168] "Request Body" body=""
	I1212 19:52:17.197104   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:17.197358   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:17.697015   48438 type.go:168] "Request Body" body=""
	I1212 19:52:17.697084   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:17.697367   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:18.196629   48438 type.go:168] "Request Body" body=""
	I1212 19:52:18.196717   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:18.197023   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:18.696554   48438 type.go:168] "Request Body" body=""
	I1212 19:52:18.696628   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:18.696875   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:18.696923   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:19.196580   48438 type.go:168] "Request Body" body=""
	I1212 19:52:19.196654   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:19.196987   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:19.696532   48438 type.go:168] "Request Body" body=""
	I1212 19:52:19.696605   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:19.696921   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:20.196969   48438 type.go:168] "Request Body" body=""
	I1212 19:52:20.197044   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:20.197330   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:20.696598   48438 type.go:168] "Request Body" body=""
	I1212 19:52:20.696690   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:20.696997   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:20.697054   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:21.197019   48438 type.go:168] "Request Body" body=""
	I1212 19:52:21.197109   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:21.197420   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:21.697065   48438 type.go:168] "Request Body" body=""
	I1212 19:52:21.697171   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:21.697471   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:22.197327   48438 type.go:168] "Request Body" body=""
	I1212 19:52:22.197400   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:22.197732   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:22.697523   48438 type.go:168] "Request Body" body=""
	I1212 19:52:22.697602   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:22.697908   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:22.697961   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:23.196582   48438 type.go:168] "Request Body" body=""
	I1212 19:52:23.196653   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:23.196911   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:23.696648   48438 type.go:168] "Request Body" body=""
	I1212 19:52:23.696728   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:23.697054   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:24.196615   48438 type.go:168] "Request Body" body=""
	I1212 19:52:24.196693   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:24.197072   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:24.696554   48438 type.go:168] "Request Body" body=""
	I1212 19:52:24.696620   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:24.696867   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:25.196559   48438 type.go:168] "Request Body" body=""
	I1212 19:52:25.196634   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:25.196989   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:25.197049   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:25.696745   48438 type.go:168] "Request Body" body=""
	I1212 19:52:25.696823   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:25.697176   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:26.197032   48438 type.go:168] "Request Body" body=""
	I1212 19:52:26.197104   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:26.197365   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:26.697133   48438 type.go:168] "Request Body" body=""
	I1212 19:52:26.697207   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:26.697533   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:27.197240   48438 type.go:168] "Request Body" body=""
	I1212 19:52:27.197313   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:27.197651   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:27.197708   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:27.696997   48438 type.go:168] "Request Body" body=""
	I1212 19:52:27.697111   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:27.697348   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:28.197148   48438 type.go:168] "Request Body" body=""
	I1212 19:52:28.197218   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:28.197538   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:28.697363   48438 type.go:168] "Request Body" body=""
	I1212 19:52:28.697444   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:28.697821   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:29.197283   48438 type.go:168] "Request Body" body=""
	I1212 19:52:29.197351   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:29.197604   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:29.697409   48438 type.go:168] "Request Body" body=""
	I1212 19:52:29.697482   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:29.697829   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:29.697881   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:30.196648   48438 type.go:168] "Request Body" body=""
	I1212 19:52:30.196718   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:30.197048   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:30.696605   48438 type.go:168] "Request Body" body=""
	I1212 19:52:30.696685   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:30.696999   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:31.196917   48438 type.go:168] "Request Body" body=""
	I1212 19:52:31.196985   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:31.197286   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:31.696593   48438 type.go:168] "Request Body" body=""
	I1212 19:52:31.696671   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:31.697003   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:32.196637   48438 type.go:168] "Request Body" body=""
	I1212 19:52:32.196716   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:32.196973   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:32.197032   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:32.696666   48438 type.go:168] "Request Body" body=""
	I1212 19:52:32.696739   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:32.697092   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:33.196825   48438 type.go:168] "Request Body" body=""
	I1212 19:52:33.196900   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:33.197340   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:33.697027   48438 type.go:168] "Request Body" body=""
	I1212 19:52:33.697095   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:33.697364   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:34.197120   48438 type.go:168] "Request Body" body=""
	I1212 19:52:34.197191   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:34.197507   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:34.197557   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:34.697300   48438 type.go:168] "Request Body" body=""
	I1212 19:52:34.697378   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:34.697686   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:35.197072   48438 type.go:168] "Request Body" body=""
	I1212 19:52:35.197158   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:35.197415   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:35.697050   48438 type.go:168] "Request Body" body=""
	I1212 19:52:35.697129   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:35.697418   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:36.197163   48438 type.go:168] "Request Body" body=""
	I1212 19:52:36.197234   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:36.197573   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:36.197628   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:36.697048   48438 type.go:168] "Request Body" body=""
	I1212 19:52:36.697115   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:36.697374   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:37.197145   48438 type.go:168] "Request Body" body=""
	I1212 19:52:37.197222   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:37.197577   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:37.697363   48438 type.go:168] "Request Body" body=""
	I1212 19:52:37.697438   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:37.697758   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:38.197052   48438 type.go:168] "Request Body" body=""
	I1212 19:52:38.197121   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:38.197364   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:38.697121   48438 type.go:168] "Request Body" body=""
	I1212 19:52:38.697188   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:38.697511   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:38.697564   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:39.197148   48438 type.go:168] "Request Body" body=""
	I1212 19:52:39.197221   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:39.197541   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:39.697045   48438 type.go:168] "Request Body" body=""
	I1212 19:52:39.697121   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:39.697416   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:40.197422   48438 type.go:168] "Request Body" body=""
	I1212 19:52:40.197496   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:40.197841   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:40.696587   48438 type.go:168] "Request Body" body=""
	I1212 19:52:40.696660   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:40.697003   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:41.196830   48438 type.go:168] "Request Body" body=""
	I1212 19:52:41.196900   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:41.197165   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:41.197208   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:41.696885   48438 type.go:168] "Request Body" body=""
	I1212 19:52:41.696962   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:41.697302   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:42.197049   48438 type.go:168] "Request Body" body=""
	I1212 19:52:42.197136   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:42.197480   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:42.697034   48438 type.go:168] "Request Body" body=""
	I1212 19:52:42.697109   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:42.697359   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:43.197128   48438 type.go:168] "Request Body" body=""
	I1212 19:52:43.197206   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:43.197560   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:43.197616   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:43.697366   48438 type.go:168] "Request Body" body=""
	I1212 19:52:43.697437   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:43.697733   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:44.197049   48438 type.go:168] "Request Body" body=""
	I1212 19:52:44.197119   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:44.197383   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:44.697154   48438 type.go:168] "Request Body" body=""
	I1212 19:52:44.697224   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:44.697554   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:45.197418   48438 type.go:168] "Request Body" body=""
	I1212 19:52:45.197622   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:45.198043   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:45.198111   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:45.696799   48438 type.go:168] "Request Body" body=""
	I1212 19:52:45.696866   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:45.697155   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:46.197195   48438 type.go:168] "Request Body" body=""
	I1212 19:52:46.197330   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:46.197994   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:46.696797   48438 type.go:168] "Request Body" body=""
	I1212 19:52:46.696869   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:46.697189   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:47.196859   48438 type.go:168] "Request Body" body=""
	I1212 19:52:47.196928   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:47.197254   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:47.696598   48438 type.go:168] "Request Body" body=""
	I1212 19:52:47.696688   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:47.697025   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:47.697081   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:48.196588   48438 type.go:168] "Request Body" body=""
	I1212 19:52:48.196659   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:48.196981   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:48.696595   48438 type.go:168] "Request Body" body=""
	I1212 19:52:48.696678   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:48.696958   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:49.196596   48438 type.go:168] "Request Body" body=""
	I1212 19:52:49.196668   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:49.196997   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:49.696687   48438 type.go:168] "Request Body" body=""
	I1212 19:52:49.696757   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:49.697080   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:49.697134   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:50.197041   48438 type.go:168] "Request Body" body=""
	I1212 19:52:50.197117   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:50.197390   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:50.697208   48438 type.go:168] "Request Body" body=""
	I1212 19:52:50.697281   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:50.697595   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:51.197221   48438 type.go:168] "Request Body" body=""
	I1212 19:52:51.197312   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:51.197623   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:51.697072   48438 type.go:168] "Request Body" body=""
	I1212 19:52:51.697142   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:51.697387   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:51.697429   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:52.197188   48438 type.go:168] "Request Body" body=""
	I1212 19:52:52.197264   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:52.197590   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:52.697371   48438 type.go:168] "Request Body" body=""
	I1212 19:52:52.697445   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:52.697761   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:53.197033   48438 type.go:168] "Request Body" body=""
	I1212 19:52:53.197099   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:53.197352   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:53.697175   48438 type.go:168] "Request Body" body=""
	I1212 19:52:53.697245   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:53.697552   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:53.697607   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:54.197356   48438 type.go:168] "Request Body" body=""
	I1212 19:52:54.197428   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:54.197758   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:54.697050   48438 type.go:168] "Request Body" body=""
	I1212 19:52:54.697121   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:54.697377   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:55.197152   48438 type.go:168] "Request Body" body=""
	I1212 19:52:55.197228   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:55.197547   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:55.697340   48438 type.go:168] "Request Body" body=""
	I1212 19:52:55.697417   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:55.697762   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:55.697823   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:56.197164   48438 type.go:168] "Request Body" body=""
	I1212 19:52:56.197236   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:56.197494   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:56.697202   48438 type.go:168] "Request Body" body=""
	I1212 19:52:56.697282   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:56.697569   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:57.197331   48438 type.go:168] "Request Body" body=""
	I1212 19:52:57.197403   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:57.197743   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:57.696985   48438 type.go:168] "Request Body" body=""
	I1212 19:52:57.697054   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:57.697293   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:58.196950   48438 type.go:168] "Request Body" body=""
	I1212 19:52:58.197019   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:58.197324   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:58.197379   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:58.697072   48438 type.go:168] "Request Body" body=""
	I1212 19:52:58.697147   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:58.697456   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:59.196999   48438 type.go:168] "Request Body" body=""
	I1212 19:52:59.197066   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:59.197315   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:59.697129   48438 type.go:168] "Request Body" body=""
	I1212 19:52:59.697205   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:59.697493   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:00.196851   48438 type.go:168] "Request Body" body=""
	I1212 19:53:00.196939   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:00.197273   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:00.697007   48438 type.go:168] "Request Body" body=""
	I1212 19:53:00.697073   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:00.697327   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:00.697369   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:01.197219   48438 type.go:168] "Request Body" body=""
	I1212 19:53:01.197300   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:01.197664   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:01.697490   48438 type.go:168] "Request Body" body=""
	I1212 19:53:01.697570   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:01.697887   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:02.196566   48438 type.go:168] "Request Body" body=""
	I1212 19:53:02.196640   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:02.196991   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:02.696584   48438 type.go:168] "Request Body" body=""
	I1212 19:53:02.696662   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:02.696978   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:03.196659   48438 type.go:168] "Request Body" body=""
	I1212 19:53:03.196736   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:03.197062   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:03.197116   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:03.696744   48438 type.go:168] "Request Body" body=""
	I1212 19:53:03.696816   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:03.697096   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:04.196604   48438 type.go:168] "Request Body" body=""
	I1212 19:53:04.196696   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:04.196975   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:04.696667   48438 type.go:168] "Request Body" body=""
	I1212 19:53:04.696738   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:04.697035   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:05.196540   48438 type.go:168] "Request Body" body=""
	I1212 19:53:05.196625   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:05.196924   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:05.696636   48438 type.go:168] "Request Body" body=""
	I1212 19:53:05.696707   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:05.697025   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:05.697088   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:06.197081   48438 type.go:168] "Request Body" body=""
	I1212 19:53:06.197153   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:06.197462   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:06.697010   48438 type.go:168] "Request Body" body=""
	I1212 19:53:06.697082   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:06.697333   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:07.196594   48438 type.go:168] "Request Body" body=""
	I1212 19:53:07.196664   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:07.197028   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:07.696603   48438 type.go:168] "Request Body" body=""
	I1212 19:53:07.696677   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:07.696959   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:08.196549   48438 type.go:168] "Request Body" body=""
	I1212 19:53:08.196615   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:08.196859   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:08.196896   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:08.696526   48438 type.go:168] "Request Body" body=""
	I1212 19:53:08.696594   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:08.696893   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:09.196605   48438 type.go:168] "Request Body" body=""
	I1212 19:53:09.196693   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:09.197023   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:09.696819   48438 type.go:168] "Request Body" body=""
	I1212 19:53:09.696900   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:09.697219   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:10.197180   48438 type.go:168] "Request Body" body=""
	I1212 19:53:10.197269   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:10.197631   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:10.197708   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:10.697487   48438 type.go:168] "Request Body" body=""
	I1212 19:53:10.697560   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:10.697908   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:11.196944   48438 type.go:168] "Request Body" body=""
	I1212 19:53:11.197056   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:11.197357   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:11.696930   48438 type.go:168] "Request Body" body=""
	I1212 19:53:11.697002   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:11.697326   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:12.196909   48438 type.go:168] "Request Body" body=""
	I1212 19:53:12.196979   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:12.197321   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:12.697013   48438 type.go:168] "Request Body" body=""
	I1212 19:53:12.697077   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:12.697339   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:12.697378   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:13.197093   48438 type.go:168] "Request Body" body=""
	I1212 19:53:13.197164   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:13.197492   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:13.697289   48438 type.go:168] "Request Body" body=""
	I1212 19:53:13.697359   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:13.697687   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:14.197038   48438 type.go:168] "Request Body" body=""
	I1212 19:53:14.197112   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:14.197374   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:14.697159   48438 type.go:168] "Request Body" body=""
	I1212 19:53:14.697235   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:14.697577   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:14.697635   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:15.197270   48438 type.go:168] "Request Body" body=""
	I1212 19:53:15.197347   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:15.197686   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:15.697028   48438 type.go:168] "Request Body" body=""
	I1212 19:53:15.697098   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:15.697375   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:16.197163   48438 type.go:168] "Request Body" body=""
	I1212 19:53:16.197234   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:16.197577   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:16.697350   48438 type.go:168] "Request Body" body=""
	I1212 19:53:16.697425   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:16.697752   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:16.697808   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:17.197507   48438 type.go:168] "Request Body" body=""
	I1212 19:53:17.197577   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:17.197829   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:17.696504   48438 type.go:168] "Request Body" body=""
	I1212 19:53:17.696575   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:17.696899   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:18.196504   48438 type.go:168] "Request Body" body=""
	I1212 19:53:18.196576   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:18.196901   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:18.696544   48438 type.go:168] "Request Body" body=""
	I1212 19:53:18.696610   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:18.696900   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:19.196582   48438 type.go:168] "Request Body" body=""
	I1212 19:53:19.196663   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:19.197008   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:19.197061   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:19.696592   48438 type.go:168] "Request Body" body=""
	I1212 19:53:19.696666   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:19.696984   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:20.196979   48438 type.go:168] "Request Body" body=""
	I1212 19:53:20.197046   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:20.197295   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:20.696583   48438 type.go:168] "Request Body" body=""
	I1212 19:53:20.696657   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:20.696990   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:21.196822   48438 type.go:168] "Request Body" body=""
	I1212 19:53:21.196900   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:21.197244   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:21.197296   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:21.696752   48438 type.go:168] "Request Body" body=""
	I1212 19:53:21.696826   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:21.697073   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:22.196577   48438 type.go:168] "Request Body" body=""
	I1212 19:53:22.196648   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:22.196951   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:22.696609   48438 type.go:168] "Request Body" body=""
	I1212 19:53:22.696679   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:22.697012   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:23.196688   48438 type.go:168] "Request Body" body=""
	I1212 19:53:23.196752   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:23.197027   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:23.696698   48438 type.go:168] "Request Body" body=""
	I1212 19:53:23.696777   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:23.697096   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:23.697150   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:24.196817   48438 type.go:168] "Request Body" body=""
	I1212 19:53:24.196890   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:24.197211   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:24.696560   48438 type.go:168] "Request Body" body=""
	I1212 19:53:24.696634   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:24.696929   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:25.196601   48438 type.go:168] "Request Body" body=""
	I1212 19:53:25.196677   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:25.196990   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:25.696605   48438 type.go:168] "Request Body" body=""
	I1212 19:53:25.696679   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:25.696998   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:26.196896   48438 type.go:168] "Request Body" body=""
	I1212 19:53:26.196961   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:26.197214   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:26.197253   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:26.696574   48438 type.go:168] "Request Body" body=""
	I1212 19:53:26.696649   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:26.696959   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:27.196612   48438 type.go:168] "Request Body" body=""
	I1212 19:53:27.196684   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:27.197007   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:27.696544   48438 type.go:168] "Request Body" body=""
	I1212 19:53:27.696619   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:27.696894   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:28.196507   48438 type.go:168] "Request Body" body=""
	I1212 19:53:28.196604   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:28.196939   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:28.696532   48438 type.go:168] "Request Body" body=""
	I1212 19:53:28.696610   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:28.696931   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:28.696979   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:29.196634   48438 type.go:168] "Request Body" body=""
	I1212 19:53:29.196703   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:29.197001   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:29.696592   48438 type.go:168] "Request Body" body=""
	I1212 19:53:29.696669   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:29.696967   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:30.196961   48438 type.go:168] "Request Body" body=""
	I1212 19:53:30.197040   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:30.197390   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:30.696556   48438 type.go:168] "Request Body" body=""
	I1212 19:53:30.696640   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:30.696996   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:30.697048   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical GET https://192.168.49.2:8441/api/v1/nodes/functional-384006 polls repeat every ~500ms from 19:53:31 through 19:54:32, each returning status="" headers="" in 0 ms; node_ready.go:55 logs the same "dial tcp 192.168.49.2:8441: connect: connection refused" retry warning roughly every 2s ...]
	I1212 19:54:32.196886   48438 type.go:168] "Request Body" body=""
	I1212 19:54:32.196974   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:32.197251   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:32.197302   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:32.696579   48438 type.go:168] "Request Body" body=""
	I1212 19:54:32.696652   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:32.696967   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:33.196682   48438 type.go:168] "Request Body" body=""
	I1212 19:54:33.196752   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:33.197083   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:33.696765   48438 type.go:168] "Request Body" body=""
	I1212 19:54:33.696829   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:33.697124   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:34.196597   48438 type.go:168] "Request Body" body=""
	I1212 19:54:34.196667   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:34.197010   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:34.696713   48438 type.go:168] "Request Body" body=""
	I1212 19:54:34.696782   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:34.697098   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:34.697159   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:35.196600   48438 type.go:168] "Request Body" body=""
	I1212 19:54:35.196677   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:35.197023   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:35.696607   48438 type.go:168] "Request Body" body=""
	I1212 19:54:35.696685   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:35.697032   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:36.197131   48438 type.go:168] "Request Body" body=""
	I1212 19:54:36.197248   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:36.197583   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:36.697028   48438 type.go:168] "Request Body" body=""
	I1212 19:54:36.697092   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:36.697333   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:36.697376   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:37.197121   48438 type.go:168] "Request Body" body=""
	I1212 19:54:37.197202   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:37.197549   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:37.697356   48438 type.go:168] "Request Body" body=""
	I1212 19:54:37.697425   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:37.697755   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:38.196491   48438 type.go:168] "Request Body" body=""
	I1212 19:54:38.196580   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:38.196847   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:38.696555   48438 type.go:168] "Request Body" body=""
	I1212 19:54:38.696630   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:38.697022   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:39.196611   48438 type.go:168] "Request Body" body=""
	I1212 19:54:39.196682   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:39.196997   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:39.197044   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:39.696650   48438 type.go:168] "Request Body" body=""
	I1212 19:54:39.696714   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:39.696973   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:40.197051   48438 type.go:168] "Request Body" body=""
	I1212 19:54:40.197133   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:40.197510   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:40.697348   48438 type.go:168] "Request Body" body=""
	I1212 19:54:40.697434   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:40.697779   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:41.197126   48438 type.go:168] "Request Body" body=""
	I1212 19:54:41.197191   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:41.197489   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:41.197543   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:41.697273   48438 type.go:168] "Request Body" body=""
	I1212 19:54:41.697350   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:41.697678   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:42.197609   48438 type.go:168] "Request Body" body=""
	I1212 19:54:42.197692   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:42.198720   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:42.697042   48438 type.go:168] "Request Body" body=""
	I1212 19:54:42.697110   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:42.697353   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:43.197138   48438 type.go:168] "Request Body" body=""
	I1212 19:54:43.197208   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:43.197507   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:43.197562   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:43.697072   48438 type.go:168] "Request Body" body=""
	I1212 19:54:43.697139   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:43.697491   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:44.197018   48438 type.go:168] "Request Body" body=""
	I1212 19:54:44.197082   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:44.197326   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:44.696568   48438 type.go:168] "Request Body" body=""
	I1212 19:54:44.696643   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:44.696984   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:45.196739   48438 type.go:168] "Request Body" body=""
	I1212 19:54:45.196924   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:45.201386   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1212 19:54:45.201507   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:45.697031   48438 type.go:168] "Request Body" body=""
	I1212 19:54:45.697100   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:45.697337   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:46.197134   48438 type.go:168] "Request Body" body=""
	I1212 19:54:46.197222   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:46.197531   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:46.697301   48438 type.go:168] "Request Body" body=""
	I1212 19:54:46.697388   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:46.697735   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:47.197052   48438 type.go:168] "Request Body" body=""
	I1212 19:54:47.197121   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:47.197422   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:47.697238   48438 type.go:168] "Request Body" body=""
	I1212 19:54:47.697317   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:47.697650   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:47.697707   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:48.197476   48438 type.go:168] "Request Body" body=""
	I1212 19:54:48.197548   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:48.197868   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:48.696528   48438 type.go:168] "Request Body" body=""
	I1212 19:54:48.696600   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:48.696881   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:49.196620   48438 type.go:168] "Request Body" body=""
	I1212 19:54:49.196696   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:49.197016   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:49.696697   48438 type.go:168] "Request Body" body=""
	I1212 19:54:49.696774   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:49.697075   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:50.197033   48438 type.go:168] "Request Body" body=""
	I1212 19:54:50.197106   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:50.197414   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:50.197468   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:50.697208   48438 type.go:168] "Request Body" body=""
	I1212 19:54:50.697277   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:50.697625   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:51.197524   48438 type.go:168] "Request Body" body=""
	I1212 19:54:51.197596   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:51.197883   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:51.696529   48438 type.go:168] "Request Body" body=""
	I1212 19:54:51.696602   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:51.696953   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:52.196625   48438 type.go:168] "Request Body" body=""
	I1212 19:54:52.196695   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:52.197003   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:52.696563   48438 type.go:168] "Request Body" body=""
	I1212 19:54:52.696636   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:52.696938   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:52.696988   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:53.196618   48438 type.go:168] "Request Body" body=""
	I1212 19:54:53.196689   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:53.196965   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:53.696623   48438 type.go:168] "Request Body" body=""
	I1212 19:54:53.696694   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:53.697045   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:54.196759   48438 type.go:168] "Request Body" body=""
	I1212 19:54:54.196833   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:54.197151   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:54.696537   48438 type.go:168] "Request Body" body=""
	I1212 19:54:54.696603   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:54.696895   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:55.196600   48438 type.go:168] "Request Body" body=""
	I1212 19:54:55.196688   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:55.196967   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:55.197009   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:55.696707   48438 type.go:168] "Request Body" body=""
	I1212 19:54:55.696782   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:55.697095   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:56.197044   48438 type.go:168] "Request Body" body=""
	I1212 19:54:56.197110   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:56.197358   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:56.697176   48438 type.go:168] "Request Body" body=""
	I1212 19:54:56.697247   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:56.697564   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:57.197362   48438 type.go:168] "Request Body" body=""
	I1212 19:54:57.197443   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:57.197770   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:57.197827   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:57.696510   48438 type.go:168] "Request Body" body=""
	I1212 19:54:57.696582   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:57.696850   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:58.196551   48438 type.go:168] "Request Body" body=""
	I1212 19:54:58.196621   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:58.196910   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:58.696537   48438 type.go:168] "Request Body" body=""
	I1212 19:54:58.696617   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:58.696970   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:59.196540   48438 type.go:168] "Request Body" body=""
	I1212 19:54:59.196642   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:59.196980   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:59.696618   48438 type.go:168] "Request Body" body=""
	I1212 19:54:59.696689   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:59.697012   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:59.697072   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:55:00.196536   48438 type.go:168] "Request Body" body=""
	I1212 19:55:00.196632   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:00.196977   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:00.696665   48438 type.go:168] "Request Body" body=""
	I1212 19:55:00.696746   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:00.697082   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:01.197001   48438 type.go:168] "Request Body" body=""
	I1212 19:55:01.197085   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:01.197440   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:01.697258   48438 type.go:168] "Request Body" body=""
	I1212 19:55:01.697333   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:01.697671   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:55:01.697735   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:55:02.197006   48438 type.go:168] "Request Body" body=""
	I1212 19:55:02.197095   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:02.197408   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:02.697262   48438 type.go:168] "Request Body" body=""
	I1212 19:55:02.697333   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:02.697664   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:03.197461   48438 type.go:168] "Request Body" body=""
	I1212 19:55:03.197544   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:03.197886   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:03.696539   48438 type.go:168] "Request Body" body=""
	I1212 19:55:03.696609   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:03.696903   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:04.196604   48438 type.go:168] "Request Body" body=""
	I1212 19:55:04.196692   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:04.197007   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:55:04.197059   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:55:04.696722   48438 type.go:168] "Request Body" body=""
	I1212 19:55:04.696801   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:04.697084   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:05.196551   48438 type.go:168] "Request Body" body=""
	I1212 19:55:05.196619   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:05.196920   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:05.696558   48438 type.go:168] "Request Body" body=""
	I1212 19:55:05.696654   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:05.696970   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:06.196854   48438 type.go:168] "Request Body" body=""
	I1212 19:55:06.196928   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:06.197258   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:55:06.197306   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:55:06.696660   48438 type.go:168] "Request Body" body=""
	I1212 19:55:06.696733   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:06.696983   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:07.196575   48438 type.go:168] "Request Body" body=""
	I1212 19:55:07.196663   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:07.197112   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:07.696611   48438 type.go:168] "Request Body" body=""
	I1212 19:55:07.696697   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:07.697039   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:08.196559   48438 type.go:168] "Request Body" body=""
	I1212 19:55:08.196627   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:08.196929   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:08.696573   48438 type.go:168] "Request Body" body=""
	I1212 19:55:08.696643   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:08.696979   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:55:08.697031   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:55:09.196708   48438 type.go:168] "Request Body" body=""
	I1212 19:55:09.196785   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:09.197099   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:09.696682   48438 type.go:168] "Request Body" body=""
	I1212 19:55:09.696750   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:09.697054   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:10.196593   48438 type.go:168] "Request Body" body=""
	I1212 19:55:10.196676   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:10.197018   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:10.696766   48438 type.go:168] "Request Body" body=""
	I1212 19:55:10.696855   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:10.697231   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:55:10.697295   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:55:11.196994   48438 node_ready.go:38] duration metric: took 6m0.000614517s for node "functional-384006" to be "Ready" ...
	I1212 19:55:11.200166   48438 out.go:203] 
	W1212 19:55:11.203009   48438 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 19:55:11.203186   48438 out.go:285] * 
	W1212 19:55:11.205457   48438 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 19:55:11.208306   48438 out.go:203] 
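
The six-minute loop above is minikube's node-readiness wait running out its context deadline: every poll is a GET on /api/v1/nodes/functional-384006 that dies with "connection refused" because the apiserver never came up. Below is a minimal sketch of that kind of deadline-bounded readiness poll, assuming client-go; the helper name and kubeconfig path are illustrative, not minikube's actual code.

// nodeready.go: poll a node's Ready condition until a deadline, mirroring
// the 500ms GET cadence visible in the log above (illustrative sketch).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	// After 6m of unsuccessful polls this returns context.DeadlineExceeded,
	// which is what surfaces as "WaitNodeCondition: context deadline exceeded".
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// "connection refused" lands here; returning (false, nil) keeps retrying.
				fmt.Printf("will retry: %v\n", err)
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "functional-384006"); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}

With nothing listening on 8441, the condition func never returns true, so the poll can only end with the deadline error seen above.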
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.743801454Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.743817642Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.743915125Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.743934070Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.743946164Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.743958808Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.743968129Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.743979025Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.743996739Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.744035590Z" level=info msg="Connect containerd service"
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.744313966Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.744845178Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.761811243Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.761874413Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.761907216Z" level=info msg="Start subscribing containerd event"
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.761954886Z" level=info msg="Start recovering state"
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.804116204Z" level=info msg="Start event monitor"
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.804339477Z" level=info msg="Start cni network conf syncer for default"
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.804436730Z" level=info msg="Start streaming server"
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.804520478Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.804757083Z" level=info msg="runtime interface starting up..."
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.804833307Z" level=info msg="starting plugins..."
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.804898126Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 19:49:08 functional-384006 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 12 19:49:08 functional-384006 containerd[5201]: time="2025-12-12T19:49:08.807197031Z" level=info msg="containerd successfully booted in 0.084279s"
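
containerd itself boots fine here; the only error in its startup log is the CNI loader finding no network config. A tiny sketch, assuming the standard /etc/cni/net.d location named in the message above, of the same presence check (file patterns and exit code are illustrative):

// cnicheck.go: report whether any CNI network config exists, mirroring the
// "no network config found in /etc/cni/net.d" condition above (sketch only).
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confs, _ := filepath.Glob("/etc/cni/net.d/*.conf")
	lists, _ := filepath.Glob("/etc/cni/net.d/*.conflist")
	confs = append(confs, lists...)
	if len(confs) == 0 {
		// With no config, containerd's CRI plugin reports the network as not ready.
		fmt.Fprintln(os.Stderr, "no CNI config found in /etc/cni/net.d")
		os.Exit(1)
	}
	for _, c := range confs {
		fmt.Println("found CNI config:", c)
	}
}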
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:55:15.344713    8552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:55:15.345422    8552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:55:15.347172    8552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:55:15.347735    8552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:55:15.349275    8552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 19:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014827] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497798] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.037128] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.743560] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.524348] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 19:55:15 up 37 min,  0 user,  load average: 0.16, 0.24, 0.54
	Linux functional-384006 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 19:55:12 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 19:55:12 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 12 19:55:12 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:12 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:12 functional-384006 kubelet[8331]: E1212 19:55:12.742069    8331 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 19:55:12 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 19:55:12 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 19:55:13 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 812.
	Dec 12 19:55:13 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:13 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:13 functional-384006 kubelet[8424]: E1212 19:55:13.505125    8424 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 19:55:13 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 19:55:13 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 19:55:14 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 813.
	Dec 12 19:55:14 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:14 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:14 functional-384006 kubelet[8445]: E1212 19:55:14.261032    8445 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 19:55:14 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 19:55:14 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 19:55:14 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 814.
	Dec 12 19:55:14 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:14 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:14 functional-384006 kubelet[8467]: E1212 19:55:14.993307    8467 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 19:55:14 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 19:55:14 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
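
The kubelet is in a tight restart loop (restart counter 811 through 814 in a few seconds) because its config validation rejects cgroup v1 hosts outright. A minimal sketch, assuming golang.org/x/sys/unix, of the filesystem-magic check commonly used (for example by runc's libcontainer) to tell the two cgroup modes apart:

// cgroupmode.go: detect whether the host runs the unified (v2) cgroup
// hierarchy by checking the filesystem magic of /sys/fs/cgroup.
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		fmt.Println("statfs failed:", err)
		return
	}
	if st.Type == unix.CGROUP2_SUPER_MAGIC {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		// A tmpfs magic here means a v1 hierarchy; this kubelet build refuses to start.
		fmt.Println("cgroup v1 hierarchy")
	}
}

On this host the check would report v1, matching the "cgroup v1 support is unsupported" validation error repeated in the log.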
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-384006 -n functional-384006
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-384006 -n functional-384006: exit status 2 (385.527037ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-384006" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 kubectl -- --context functional-384006 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-384006 kubectl -- --context functional-384006 get pods: exit status 1 (114.836341ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-384006 kubectl -- --context functional-384006 get pods": exit status 1
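
The "connection refused" here is consistent with everything above: with the kubelet never starting, no static apiserver pod exists, so nothing listens on 192.168.49.2:8441. A quick probe, sketched in Go with the address hard-coded for this profile, to distinguish a refused connection (host up, no listener) from a timeout (host unreachable):

// portprobe.go: check the minikube apiserver endpoint for this profile.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		// "connect: connection refused" means the host answered but port 8441 is closed.
		fmt.Println("apiserver not accepting connections:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8441")
}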
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-384006
helpers_test.go:244: (dbg) docker inspect functional-384006:

-- stdout --
	[
	    {
	        "Id": "b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17",
	        "Created": "2025-12-12T19:40:49.413785329Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43086,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T19:40:49.485581335Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/hosts",
	        "LogPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17-json.log",
	        "Name": "/functional-384006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-384006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-384006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17",
	                "LowerDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296-init/diff:/var/lib/docker/overlay2/e045d4bf347c64f3cbf42a97f0cb5729ed5699bda73ca5751717f555f7c01df1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/merged",
	                "UpperDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/diff",
	                "WorkDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-384006",
	                "Source": "/var/lib/docker/volumes/functional-384006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-384006",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-384006",
	                "name.minikube.sigs.k8s.io": "functional-384006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "36cb954f7d4f6bf90d415ba6b309740af43913afba20f6d7d93ec3c7d90d4de5",
	            "SandboxKey": "/var/run/docker/netns/36cb954f7d4f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-384006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:63:42:b7:50:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef3790c143c0333ab10341d6a40177cef53914dddf926d048a811221f7b4d25e",
	                    "EndpointID": "d9f77e46696253f9c3ce8a0a36703d7a03738ae348c39276dbe99fc3079fb5ee",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-384006",
	                        "b1a98cbc4698"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
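Two details in this inspect output are worth noting: HostConfig.PortBindings requests ephemeral ports (HostIp 127.0.0.1 with an empty HostPort), while NetworkSettings.Ports records what Docker actually assigned, e.g. 8441/tcp on 127.0.0.1:32791. The same Go-template trick the harness uses below for 22/tcp works for the apiserver port too, for example:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-384006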
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-384006 -n functional-384006
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-384006 -n functional-384006: exit status 2 (302.024457ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
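This {{.Host}}=Running probe, together with the earlier {{.APIServer}}=Stopped one, shows a half-alive cluster: the node container is up but the control plane is not, so the harness treats exit status 2 as tolerable and proceeds to post-mortem. All component fields can be read in one call with the same template mechanism (field names as exposed by minikube's status struct):

	out/minikube-linux-arm64 status -p functional-384006 --format 'host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'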
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-008271 image ls --format short --alsologtostderr                                                                                             │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image   │ functional-008271 image ls --format yaml --alsologtostderr                                                                                              │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image   │ functional-008271 image ls --format json --alsologtostderr                                                                                              │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image   │ functional-008271 image ls --format table --alsologtostderr                                                                                             │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ ssh     │ functional-008271 ssh pgrep buildkitd                                                                                                                   │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │                     │
	│ image   │ functional-008271 image build -t localhost/my-image:functional-008271 testdata/build --alsologtostderr                                                  │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image   │ functional-008271 image ls                                                                                                                              │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ delete  │ -p functional-008271                                                                                                                                    │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ start   │ -p functional-384006 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │                     │
	│ start   │ -p functional-384006 --alsologtostderr -v=8                                                                                                             │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:49 UTC │                     │
	│ cache   │ functional-384006 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ functional-384006 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ functional-384006 cache add registry.k8s.io/pause:latest                                                                                                │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ functional-384006 cache add minikube-local-cache-test:functional-384006                                                                                 │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ functional-384006 cache delete minikube-local-cache-test:functional-384006                                                                              │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ ssh     │ functional-384006 ssh sudo crictl images                                                                                                                │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ ssh     │ functional-384006 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ ssh     │ functional-384006 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │                     │
	│ cache   │ functional-384006 cache reload                                                                                                                          │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ ssh     │ functional-384006 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ kubectl │ functional-384006 kubectl -- --context functional-384006 get pods                                                                                       │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:49:06
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:49:06.161667   48438 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:49:06.161882   48438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:49:06.161913   48438 out.go:374] Setting ErrFile to fd 2...
	I1212 19:49:06.161935   48438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:49:06.162192   48438 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 19:49:06.162605   48438 out.go:368] Setting JSON to false
	I1212 19:49:06.163501   48438 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1896,"bootTime":1765567051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 19:49:06.163603   48438 start.go:143] virtualization:  
	I1212 19:49:06.167059   48438 out.go:179] * [functional-384006] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 19:49:06.170023   48438 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:49:06.170127   48438 notify.go:221] Checking for updates...
	I1212 19:49:06.175791   48438 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:49:06.178620   48438 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:49:06.181479   48438 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	I1212 19:49:06.184334   48438 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 19:49:06.187177   48438 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:49:06.190472   48438 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 19:49:06.190582   48438 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:49:06.226589   48438 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 19:49:06.226705   48438 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:49:06.287038   48438 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 19:49:06.278380602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 19:49:06.287144   48438 docker.go:319] overlay module found
	I1212 19:49:06.290214   48438 out.go:179] * Using the docker driver based on existing profile
	I1212 19:49:06.293103   48438 start.go:309] selected driver: docker
	I1212 19:49:06.293122   48438 start.go:927] validating driver "docker" against &{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:49:06.293257   48438 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:49:06.293353   48438 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:49:06.346602   48438 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 19:49:06.338111982 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 19:49:06.347001   48438 cni.go:84] Creating CNI manager for ""
	I1212 19:49:06.347058   48438 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 19:49:06.347109   48438 start.go:353] cluster config:
	{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:49:06.350199   48438 out.go:179] * Starting "functional-384006" primary control-plane node in "functional-384006" cluster
	I1212 19:49:06.353090   48438 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 19:49:06.356052   48438 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 19:49:06.358945   48438 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 19:49:06.359005   48438 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 19:49:06.359039   48438 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 19:49:06.359049   48438 cache.go:65] Caching tarball of preloaded images
	I1212 19:49:06.359132   48438 preload.go:238] Found /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 19:49:06.359143   48438 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 19:49:06.359246   48438 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/config.json ...
	I1212 19:49:06.377622   48438 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 19:49:06.377646   48438 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 19:49:06.377660   48438 cache.go:243] Successfully downloaded all kic artifacts
	I1212 19:49:06.377689   48438 start.go:360] acquireMachinesLock for functional-384006: {Name:mk3334c8fedf7efc32fb4628474f2cba3c1d9181 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:49:06.377751   48438 start.go:364] duration metric: took 39.285µs to acquireMachinesLock for "functional-384006"
	I1212 19:49:06.377774   48438 start.go:96] Skipping create...Using existing machine configuration
	I1212 19:49:06.377781   48438 fix.go:54] fixHost starting: 
	I1212 19:49:06.378037   48438 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
	I1212 19:49:06.394046   48438 fix.go:112] recreateIfNeeded on functional-384006: state=Running err=<nil>
	W1212 19:49:06.394073   48438 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 19:49:06.397347   48438 out.go:252] * Updating the running docker "functional-384006" container ...
	I1212 19:49:06.397378   48438 machine.go:94] provisionDockerMachine start ...
	I1212 19:49:06.397470   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:06.413547   48438 main.go:143] libmachine: Using SSH client type: native
	I1212 19:49:06.413876   48438 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:49:06.413891   48438 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 19:49:06.567084   48438 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-384006
	
	I1212 19:49:06.567107   48438 ubuntu.go:182] provisioning hostname "functional-384006"
	I1212 19:49:06.567205   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:06.584099   48438 main.go:143] libmachine: Using SSH client type: native
	I1212 19:49:06.584405   48438 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:49:06.584422   48438 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-384006 && echo "functional-384006" | sudo tee /etc/hostname
	I1212 19:49:06.744613   48438 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-384006
	
	I1212 19:49:06.744691   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:06.765941   48438 main.go:143] libmachine: Using SSH client type: native
	I1212 19:49:06.766253   48438 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:49:06.766274   48438 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-384006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-384006/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-384006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 19:49:06.919909   48438 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 19:49:06.919937   48438 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-2315/.minikube}
	I1212 19:49:06.919964   48438 ubuntu.go:190] setting up certificates
	I1212 19:49:06.919986   48438 provision.go:84] configureAuth start
	I1212 19:49:06.920046   48438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384006
	I1212 19:49:06.936937   48438 provision.go:143] copyHostCerts
	I1212 19:49:06.936980   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem
	I1212 19:49:06.937022   48438 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem, removing ...
	I1212 19:49:06.937035   48438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem
	I1212 19:49:06.937107   48438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem (1078 bytes)
	I1212 19:49:06.937204   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem
	I1212 19:49:06.937227   48438 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem, removing ...
	I1212 19:49:06.937232   48438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem
	I1212 19:49:06.937260   48438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem (1123 bytes)
	I1212 19:49:06.937320   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem
	I1212 19:49:06.937341   48438 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem, removing ...
	I1212 19:49:06.937354   48438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem
	I1212 19:49:06.937380   48438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem (1679 bytes)
	I1212 19:49:06.937435   48438 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem org=jenkins.functional-384006 san=[127.0.0.1 192.168.49.2 functional-384006 localhost minikube]
	I1212 19:49:07.142288   48438 provision.go:177] copyRemoteCerts
	I1212 19:49:07.142366   48438 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 19:49:07.142409   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:07.158934   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:07.267886   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 19:49:07.267945   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 19:49:07.284419   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 19:49:07.284477   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 19:49:07.301465   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 19:49:07.301546   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 19:49:07.318717   48438 provision.go:87] duration metric: took 398.706755ms to configureAuth
	I1212 19:49:07.318790   48438 ubuntu.go:206] setting minikube options for container-runtime
	I1212 19:49:07.319006   48438 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 19:49:07.319035   48438 machine.go:97] duration metric: took 921.650297ms to provisionDockerMachine
	I1212 19:49:07.319058   48438 start.go:293] postStartSetup for "functional-384006" (driver="docker")
	I1212 19:49:07.319080   48438 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 19:49:07.319173   48438 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 19:49:07.319238   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:07.336520   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:07.439884   48438 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 19:49:07.443234   48438 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1212 19:49:07.443254   48438 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1212 19:49:07.443259   48438 command_runner.go:130] > VERSION_ID="12"
	I1212 19:49:07.443263   48438 command_runner.go:130] > VERSION="12 (bookworm)"
	I1212 19:49:07.443268   48438 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1212 19:49:07.443272   48438 command_runner.go:130] > ID=debian
	I1212 19:49:07.443276   48438 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1212 19:49:07.443281   48438 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1212 19:49:07.443289   48438 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1212 19:49:07.443341   48438 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 19:49:07.443361   48438 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 19:49:07.443371   48438 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-2315/.minikube/addons for local assets ...
	I1212 19:49:07.443421   48438 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-2315/.minikube/files for local assets ...
	I1212 19:49:07.443503   48438 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1212 19:49:07.443510   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem -> /etc/ssl/certs/41202.pem
	I1212 19:49:07.443585   48438 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> hosts in /etc/test/nested/copy/4120
	I1212 19:49:07.443589   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> /etc/test/nested/copy/4120/hosts
	I1212 19:49:07.443629   48438 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4120
	I1212 19:49:07.450818   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1212 19:49:07.468474   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts --> /etc/test/nested/copy/4120/hosts (40 bytes)
	I1212 19:49:07.485034   48438 start.go:296] duration metric: took 165.952143ms for postStartSetup
	I1212 19:49:07.485111   48438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 19:49:07.485180   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:07.502057   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:07.604226   48438 command_runner.go:130] > 12%
	I1212 19:49:07.604746   48438 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 19:49:07.609551   48438 command_runner.go:130] > 172G
	I1212 19:49:07.609593   48438 fix.go:56] duration metric: took 1.231809331s for fixHost
	I1212 19:49:07.609604   48438 start.go:83] releasing machines lock for "functional-384006", held for 1.231841888s
	I1212 19:49:07.609687   48438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384006
	I1212 19:49:07.626230   48438 ssh_runner.go:195] Run: cat /version.json
	I1212 19:49:07.626285   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:07.626592   48438 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 19:49:07.626649   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:07.648515   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:07.651511   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:07.751468   48438 command_runner.go:130] > {"iso_version": "v1.37.0-1765481609-22101", "kicbase_version": "v0.0.48-1765505794-22112", "minikube_version": "v1.37.0", "commit": "2e51b54b5cee5d454381ac23cfe3d8d395879671"}
	I1212 19:49:07.751688   48438 ssh_runner.go:195] Run: systemctl --version
	I1212 19:49:07.840262   48438 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 19:49:07.843071   48438 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1212 19:49:07.843106   48438 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1212 19:49:07.843235   48438 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 19:49:07.847707   48438 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 19:49:07.847791   48438 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 19:49:07.847870   48438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 19:49:07.855348   48438 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 19:49:07.855380   48438 start.go:496] detecting cgroup driver to use...
	I1212 19:49:07.855411   48438 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 19:49:07.855473   48438 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 19:49:07.872745   48438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 19:49:07.888438   48438 docker.go:218] disabling cri-docker service (if available) ...
	I1212 19:49:07.888499   48438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 19:49:07.905328   48438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 19:49:07.922378   48438 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 19:49:08.040559   48438 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 19:49:08.153632   48438 docker.go:234] disabling docker service ...
	I1212 19:49:08.153749   48438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 19:49:08.170255   48438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 19:49:08.183563   48438 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 19:49:08.296935   48438 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 19:49:08.413119   48438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 19:49:08.425880   48438 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 19:49:08.438681   48438 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 19:49:08.439732   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 19:49:08.448541   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 19:49:08.457430   48438 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 19:49:08.457506   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 19:49:08.466099   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:49:08.474729   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 19:49:08.483278   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:49:08.491712   48438 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 19:49:08.499807   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 19:49:08.508171   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 19:49:08.517078   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 19:49:08.525348   48438 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 19:49:08.531636   48438 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 19:49:08.532621   48438 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 19:49:08.539615   48438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:49:08.670670   48438 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 19:49:08.806796   48438 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 19:49:08.806894   48438 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 19:49:08.810696   48438 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1212 19:49:08.810773   48438 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 19:49:08.810802   48438 command_runner.go:130] > Device: 0,72	Inode: 1616        Links: 1
	I1212 19:49:08.810829   48438 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 19:49:08.810848   48438 command_runner.go:130] > Access: 2025-12-12 19:49:08.757711126 +0000
	I1212 19:49:08.810866   48438 command_runner.go:130] > Modify: 2025-12-12 19:49:08.757711126 +0000
	I1212 19:49:08.810881   48438 command_runner.go:130] > Change: 2025-12-12 19:49:08.757711126 +0000
	I1212 19:49:08.810904   48438 command_runner.go:130] >  Birth: -
	I1212 19:49:08.811086   48438 start.go:564] Will wait 60s for crictl version
	I1212 19:49:08.811174   48438 ssh_runner.go:195] Run: which crictl
	I1212 19:49:08.814485   48438 command_runner.go:130] > /usr/local/bin/crictl
	I1212 19:49:08.814611   48438 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 19:49:08.838884   48438 command_runner.go:130] > Version:  0.1.0
	I1212 19:49:08.838955   48438 command_runner.go:130] > RuntimeName:  containerd
	I1212 19:49:08.838976   48438 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1212 19:49:08.838997   48438 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 19:49:08.840776   48438 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 19:49:08.840864   48438 ssh_runner.go:195] Run: containerd --version
	I1212 19:49:08.863238   48438 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1212 19:49:08.864954   48438 ssh_runner.go:195] Run: containerd --version
	I1212 19:49:08.884422   48438 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1212 19:49:08.891508   48438 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 19:49:08.894468   48438 cli_runner.go:164] Run: docker network inspect functional-384006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 19:49:08.910430   48438 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 19:49:08.914297   48438 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1212 19:49:08.914409   48438 kubeadm.go:884] updating cluster {Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 19:49:08.914505   48438 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 19:49:08.914560   48438 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 19:49:08.938916   48438 command_runner.go:130] > {
	I1212 19:49:08.938935   48438 command_runner.go:130] >   "images":  [
	I1212 19:49:08.938940   48438 command_runner.go:130] >     {
	I1212 19:49:08.938949   48438 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 19:49:08.938953   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.938959   48438 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 19:49:08.938962   48438 command_runner.go:130] >       ],
	I1212 19:49:08.938967   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.938980   48438 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1212 19:49:08.938983   48438 command_runner.go:130] >       ],
	I1212 19:49:08.938988   48438 command_runner.go:130] >       "size":  "40636774",
	I1212 19:49:08.938991   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.938995   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.938998   48438 command_runner.go:130] >     },
	I1212 19:49:08.939001   48438 command_runner.go:130] >     {
	I1212 19:49:08.939009   48438 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 19:49:08.939013   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939018   48438 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 19:49:08.939022   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939026   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939034   48438 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 19:49:08.939038   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939045   48438 command_runner.go:130] >       "size":  "8034419",
	I1212 19:49:08.939049   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939053   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939056   48438 command_runner.go:130] >     },
	I1212 19:49:08.939059   48438 command_runner.go:130] >     {
	I1212 19:49:08.939066   48438 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 19:49:08.939069   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939075   48438 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 19:49:08.939078   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939084   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939091   48438 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1212 19:49:08.939095   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939100   48438 command_runner.go:130] >       "size":  "21168808",
	I1212 19:49:08.939104   48438 command_runner.go:130] >       "username":  "nonroot",
	I1212 19:49:08.939108   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939111   48438 command_runner.go:130] >     },
	I1212 19:49:08.939115   48438 command_runner.go:130] >     {
	I1212 19:49:08.939121   48438 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 19:49:08.939125   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939130   48438 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 19:49:08.939133   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939137   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939154   48438 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1212 19:49:08.939157   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939161   48438 command_runner.go:130] >       "size":  "21136588",
	I1212 19:49:08.939166   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.939170   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.939173   48438 command_runner.go:130] >       },
	I1212 19:49:08.939177   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939181   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939184   48438 command_runner.go:130] >     },
	I1212 19:49:08.939187   48438 command_runner.go:130] >     {
	I1212 19:49:08.939193   48438 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 19:49:08.939200   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939206   48438 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 19:49:08.939209   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939213   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939220   48438 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1212 19:49:08.939224   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939228   48438 command_runner.go:130] >       "size":  "24678359",
	I1212 19:49:08.939231   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.939241   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.939244   48438 command_runner.go:130] >       },
	I1212 19:49:08.939248   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939252   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939254   48438 command_runner.go:130] >     },
	I1212 19:49:08.939257   48438 command_runner.go:130] >     {
	I1212 19:49:08.939264   48438 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 19:49:08.939268   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939273   48438 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 19:49:08.939276   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939280   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939288   48438 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1212 19:49:08.939291   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939295   48438 command_runner.go:130] >       "size":  "20661043",
	I1212 19:49:08.939299   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.939302   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.939305   48438 command_runner.go:130] >       },
	I1212 19:49:08.939309   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939313   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939316   48438 command_runner.go:130] >     },
	I1212 19:49:08.939319   48438 command_runner.go:130] >     {
	I1212 19:49:08.939326   48438 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 19:49:08.939330   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939334   48438 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 19:49:08.939338   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939345   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939353   48438 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 19:49:08.939356   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939360   48438 command_runner.go:130] >       "size":  "22429671",
	I1212 19:49:08.939364   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939368   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939370   48438 command_runner.go:130] >     },
	I1212 19:49:08.939375   48438 command_runner.go:130] >     {
	I1212 19:49:08.939381   48438 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 19:49:08.939385   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939390   48438 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 19:49:08.939393   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939397   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939405   48438 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1212 19:49:08.939408   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939412   48438 command_runner.go:130] >       "size":  "15391364",
	I1212 19:49:08.939416   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.939420   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.939423   48438 command_runner.go:130] >       },
	I1212 19:49:08.939427   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939430   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939433   48438 command_runner.go:130] >     },
	I1212 19:49:08.939437   48438 command_runner.go:130] >     {
	I1212 19:49:08.939443   48438 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 19:49:08.939447   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939452   48438 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 19:49:08.939454   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939458   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939465   48438 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1212 19:49:08.939469   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939473   48438 command_runner.go:130] >       "size":  "267939",
	I1212 19:49:08.939476   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.939480   48438 command_runner.go:130] >         "value":  "65535"
	I1212 19:49:08.939486   48438 command_runner.go:130] >       },
	I1212 19:49:08.939490   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939493   48438 command_runner.go:130] >       "pinned":  true
	I1212 19:49:08.939496   48438 command_runner.go:130] >     }
	I1212 19:49:08.939499   48438 command_runner.go:130] >   ]
	I1212 19:49:08.939502   48438 command_runner.go:130] > }
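
minikube parses the JSON above to decide that the preload tarball already contains every required image. To eyeball the same data by hand, a one-line sketch (the `crictl images --output json` call is exactly the one in the log; having jq available on the node is an assumption):

	sudo crictl images --output json | jq -r '.images[].repoTags[]'
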
	I1212 19:49:08.940984   48438 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 19:49:08.941004   48438 containerd.go:534] Images already preloaded, skipping extraction
	I1212 19:49:08.941060   48438 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 19:49:08.962883   48438 command_runner.go:130] > {
	I1212 19:49:08.962905   48438 command_runner.go:130] >   "images":  [
	I1212 19:49:08.962910   48438 command_runner.go:130] >     {
	I1212 19:49:08.962919   48438 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 19:49:08.962924   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.962930   48438 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 19:49:08.962934   48438 command_runner.go:130] >       ],
	I1212 19:49:08.962938   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.962948   48438 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1212 19:49:08.962955   48438 command_runner.go:130] >       ],
	I1212 19:49:08.962964   48438 command_runner.go:130] >       "size":  "40636774",
	I1212 19:49:08.962971   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.962975   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.962985   48438 command_runner.go:130] >     },
	I1212 19:49:08.962993   48438 command_runner.go:130] >     {
	I1212 19:49:08.963005   48438 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 19:49:08.963012   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963017   48438 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 19:49:08.963021   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963026   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963035   48438 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 19:49:08.963040   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963045   48438 command_runner.go:130] >       "size":  "8034419",
	I1212 19:49:08.963049   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963055   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963058   48438 command_runner.go:130] >     },
	I1212 19:49:08.963064   48438 command_runner.go:130] >     {
	I1212 19:49:08.963071   48438 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 19:49:08.963081   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963086   48438 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 19:49:08.963090   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963104   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963113   48438 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1212 19:49:08.963116   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963123   48438 command_runner.go:130] >       "size":  "21168808",
	I1212 19:49:08.963127   48438 command_runner.go:130] >       "username":  "nonroot",
	I1212 19:49:08.963132   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963137   48438 command_runner.go:130] >     },
	I1212 19:49:08.963146   48438 command_runner.go:130] >     {
	I1212 19:49:08.963157   48438 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 19:49:08.963170   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963175   48438 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 19:49:08.963178   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963187   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963198   48438 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1212 19:49:08.963201   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963210   48438 command_runner.go:130] >       "size":  "21136588",
	I1212 19:49:08.963214   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.963221   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.963224   48438 command_runner.go:130] >       },
	I1212 19:49:08.963228   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963234   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963238   48438 command_runner.go:130] >     },
	I1212 19:49:08.963241   48438 command_runner.go:130] >     {
	I1212 19:49:08.963248   48438 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 19:49:08.963255   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963260   48438 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 19:49:08.963263   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963266   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963274   48438 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1212 19:49:08.963281   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963285   48438 command_runner.go:130] >       "size":  "24678359",
	I1212 19:49:08.963288   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.963298   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.963302   48438 command_runner.go:130] >       },
	I1212 19:49:08.963309   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963313   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963319   48438 command_runner.go:130] >     },
	I1212 19:49:08.963322   48438 command_runner.go:130] >     {
	I1212 19:49:08.963329   48438 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 19:49:08.963336   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963341   48438 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 19:49:08.963344   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963348   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963356   48438 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1212 19:49:08.963363   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963367   48438 command_runner.go:130] >       "size":  "20661043",
	I1212 19:49:08.963370   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.963374   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.963382   48438 command_runner.go:130] >       },
	I1212 19:49:08.963389   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963393   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963396   48438 command_runner.go:130] >     },
	I1212 19:49:08.963399   48438 command_runner.go:130] >     {
	I1212 19:49:08.963406   48438 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 19:49:08.963413   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963418   48438 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 19:49:08.963421   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963425   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963433   48438 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 19:49:08.963440   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963444   48438 command_runner.go:130] >       "size":  "22429671",
	I1212 19:49:08.963448   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963452   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963455   48438 command_runner.go:130] >     },
	I1212 19:49:08.963458   48438 command_runner.go:130] >     {
	I1212 19:49:08.963465   48438 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 19:49:08.963472   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963478   48438 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 19:49:08.963483   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963487   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963498   48438 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1212 19:49:08.963503   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963509   48438 command_runner.go:130] >       "size":  "15391364",
	I1212 19:49:08.963515   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.963518   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.963521   48438 command_runner.go:130] >       },
	I1212 19:49:08.963525   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963529   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963534   48438 command_runner.go:130] >     },
	I1212 19:49:08.963537   48438 command_runner.go:130] >     {
	I1212 19:49:08.963547   48438 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 19:49:08.963555   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963560   48438 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 19:49:08.963566   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963570   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963580   48438 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1212 19:49:08.963587   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963591   48438 command_runner.go:130] >       "size":  "267939",
	I1212 19:49:08.963594   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.963598   48438 command_runner.go:130] >         "value":  "65535"
	I1212 19:49:08.963604   48438 command_runner.go:130] >       },
	I1212 19:49:08.963611   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963615   48438 command_runner.go:130] >       "pinned":  true
	I1212 19:49:08.963618   48438 command_runner.go:130] >     }
	I1212 19:49:08.963621   48438 command_runner.go:130] >   ]
	I1212 19:49:08.963624   48438 command_runner.go:130] > }
	I1212 19:49:08.965735   48438 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 19:49:08.965756   48438 cache_images.go:86] Images are preloaded, skipping loading
	I1212 19:49:08.965764   48438 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1212 19:49:08.965868   48438 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-384006 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
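
The kubelet unit and drop-in rendered above are written to the node a few lines below (10-kubeadm.conf, 328 bytes; kubelet.service, 359 bytes). A quick way to confirm what systemd actually merged, assuming the profile name from this run:

	minikube -p functional-384006 ssh -- sudo systemctl cat kubelet
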
	I1212 19:49:08.965936   48438 ssh_runner.go:195] Run: sudo crictl info
	I1212 19:49:08.990907   48438 command_runner.go:130] > {
	I1212 19:49:08.990927   48438 command_runner.go:130] >   "cniconfig": {
	I1212 19:49:08.990932   48438 command_runner.go:130] >     "Networks": [
	I1212 19:49:08.990936   48438 command_runner.go:130] >       {
	I1212 19:49:08.990942   48438 command_runner.go:130] >         "Config": {
	I1212 19:49:08.990947   48438 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1212 19:49:08.990980   48438 command_runner.go:130] >           "Name": "cni-loopback",
	I1212 19:49:08.990997   48438 command_runner.go:130] >           "Plugins": [
	I1212 19:49:08.991002   48438 command_runner.go:130] >             {
	I1212 19:49:08.991010   48438 command_runner.go:130] >               "Network": {
	I1212 19:49:08.991014   48438 command_runner.go:130] >                 "ipam": {},
	I1212 19:49:08.991020   48438 command_runner.go:130] >                 "type": "loopback"
	I1212 19:49:08.991023   48438 command_runner.go:130] >               },
	I1212 19:49:08.991033   48438 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1212 19:49:08.991041   48438 command_runner.go:130] >             }
	I1212 19:49:08.991063   48438 command_runner.go:130] >           ],
	I1212 19:49:08.991073   48438 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1212 19:49:08.991080   48438 command_runner.go:130] >         },
	I1212 19:49:08.991089   48438 command_runner.go:130] >         "IFName": "lo"
	I1212 19:49:08.991095   48438 command_runner.go:130] >       }
	I1212 19:49:08.991098   48438 command_runner.go:130] >     ],
	I1212 19:49:08.991103   48438 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1212 19:49:08.991106   48438 command_runner.go:130] >     "PluginDirs": [
	I1212 19:49:08.991109   48438 command_runner.go:130] >       "/opt/cni/bin"
	I1212 19:49:08.991113   48438 command_runner.go:130] >     ],
	I1212 19:49:08.991117   48438 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1212 19:49:08.991135   48438 command_runner.go:130] >     "Prefix": "eth"
	I1212 19:49:08.991151   48438 command_runner.go:130] >   },
	I1212 19:49:08.991154   48438 command_runner.go:130] >   "config": {
	I1212 19:49:08.991158   48438 command_runner.go:130] >     "cdiSpecDirs": [
	I1212 19:49:08.991171   48438 command_runner.go:130] >       "/etc/cdi",
	I1212 19:49:08.991184   48438 command_runner.go:130] >       "/var/run/cdi"
	I1212 19:49:08.991188   48438 command_runner.go:130] >     ],
	I1212 19:49:08.991191   48438 command_runner.go:130] >     "cni": {
	I1212 19:49:08.991195   48438 command_runner.go:130] >       "binDir": "",
	I1212 19:49:08.991202   48438 command_runner.go:130] >       "binDirs": [
	I1212 19:49:08.991206   48438 command_runner.go:130] >         "/opt/cni/bin"
	I1212 19:49:08.991209   48438 command_runner.go:130] >       ],
	I1212 19:49:08.991216   48438 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1212 19:49:08.991220   48438 command_runner.go:130] >       "confTemplate": "",
	I1212 19:49:08.991224   48438 command_runner.go:130] >       "ipPref": "",
	I1212 19:49:08.991227   48438 command_runner.go:130] >       "maxConfNum": 1,
	I1212 19:49:08.991231   48438 command_runner.go:130] >       "setupSerially": false,
	I1212 19:49:08.991235   48438 command_runner.go:130] >       "useInternalLoopback": false
	I1212 19:49:08.991248   48438 command_runner.go:130] >     },
	I1212 19:49:08.991264   48438 command_runner.go:130] >     "containerd": {
	I1212 19:49:08.991273   48438 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1212 19:49:08.991288   48438 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1212 19:49:08.991302   48438 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1212 19:49:08.991311   48438 command_runner.go:130] >       "runtimes": {
	I1212 19:49:08.991317   48438 command_runner.go:130] >         "runc": {
	I1212 19:49:08.991321   48438 command_runner.go:130] >           "ContainerAnnotations": null,
	I1212 19:49:08.991325   48438 command_runner.go:130] >           "PodAnnotations": null,
	I1212 19:49:08.991329   48438 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1212 19:49:08.991340   48438 command_runner.go:130] >           "cgroupWritable": false,
	I1212 19:49:08.991344   48438 command_runner.go:130] >           "cniConfDir": "",
	I1212 19:49:08.991347   48438 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1212 19:49:08.991351   48438 command_runner.go:130] >           "io_type": "",
	I1212 19:49:08.991366   48438 command_runner.go:130] >           "options": {
	I1212 19:49:08.991378   48438 command_runner.go:130] >             "BinaryName": "",
	I1212 19:49:08.991382   48438 command_runner.go:130] >             "CriuImagePath": "",
	I1212 19:49:08.991386   48438 command_runner.go:130] >             "CriuWorkPath": "",
	I1212 19:49:08.991400   48438 command_runner.go:130] >             "IoGid": 0,
	I1212 19:49:08.991410   48438 command_runner.go:130] >             "IoUid": 0,
	I1212 19:49:08.991414   48438 command_runner.go:130] >             "NoNewKeyring": false,
	I1212 19:49:08.991418   48438 command_runner.go:130] >             "Root": "",
	I1212 19:49:08.991422   48438 command_runner.go:130] >             "ShimCgroup": "",
	I1212 19:49:08.991427   48438 command_runner.go:130] >             "SystemdCgroup": false
	I1212 19:49:08.991433   48438 command_runner.go:130] >           },
	I1212 19:49:08.991439   48438 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1212 19:49:08.991455   48438 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1212 19:49:08.991461   48438 command_runner.go:130] >           "runtimePath": "",
	I1212 19:49:08.991476   48438 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1212 19:49:08.991487   48438 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1212 19:49:08.991491   48438 command_runner.go:130] >           "snapshotter": ""
	I1212 19:49:08.991503   48438 command_runner.go:130] >         }
	I1212 19:49:08.991510   48438 command_runner.go:130] >       }
	I1212 19:49:08.991513   48438 command_runner.go:130] >     },
	I1212 19:49:08.991525   48438 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1212 19:49:08.991540   48438 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1212 19:49:08.991547   48438 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1212 19:49:08.991554   48438 command_runner.go:130] >     "disableApparmor": false,
	I1212 19:49:08.991559   48438 command_runner.go:130] >     "disableHugetlbController": true,
	I1212 19:49:08.991564   48438 command_runner.go:130] >     "disableProcMount": false,
	I1212 19:49:08.991583   48438 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1212 19:49:08.991588   48438 command_runner.go:130] >     "enableCDI": true,
	I1212 19:49:08.991603   48438 command_runner.go:130] >     "enableSelinux": false,
	I1212 19:49:08.991616   48438 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1212 19:49:08.991621   48438 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1212 19:49:08.991627   48438 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1212 19:49:08.991634   48438 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1212 19:49:08.991639   48438 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1212 19:49:08.991643   48438 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1212 19:49:08.991653   48438 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1212 19:49:08.991658   48438 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1212 19:49:08.991662   48438 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1212 19:49:08.991678   48438 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1212 19:49:08.991689   48438 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1212 19:49:08.991694   48438 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1212 19:49:08.991696   48438 command_runner.go:130] >   },
	I1212 19:49:08.991700   48438 command_runner.go:130] >   "features": {
	I1212 19:49:08.991704   48438 command_runner.go:130] >     "supplemental_groups_policy": true
	I1212 19:49:08.991706   48438 command_runner.go:130] >   },
	I1212 19:49:08.991710   48438 command_runner.go:130] >   "golang": "go1.24.9",
	I1212 19:49:08.991719   48438 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1212 19:49:08.991728   48438 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1212 19:49:08.991732   48438 command_runner.go:130] >   "runtimeHandlers": [
	I1212 19:49:08.991735   48438 command_runner.go:130] >     {
	I1212 19:49:08.991739   48438 command_runner.go:130] >       "features": {
	I1212 19:49:08.991743   48438 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1212 19:49:08.991747   48438 command_runner.go:130] >         "user_namespaces": true
	I1212 19:49:08.991751   48438 command_runner.go:130] >       }
	I1212 19:49:08.991759   48438 command_runner.go:130] >     },
	I1212 19:49:08.991762   48438 command_runner.go:130] >     {
	I1212 19:49:08.991766   48438 command_runner.go:130] >       "features": {
	I1212 19:49:08.991770   48438 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1212 19:49:08.991774   48438 command_runner.go:130] >         "user_namespaces": true
	I1212 19:49:08.991796   48438 command_runner.go:130] >       },
	I1212 19:49:08.991800   48438 command_runner.go:130] >       "name": "runc"
	I1212 19:49:08.991803   48438 command_runner.go:130] >     }
	I1212 19:49:08.991807   48438 command_runner.go:130] >   ],
	I1212 19:49:08.991875   48438 command_runner.go:130] >   "status": {
	I1212 19:49:08.991889   48438 command_runner.go:130] >     "conditions": [
	I1212 19:49:08.991892   48438 command_runner.go:130] >       {
	I1212 19:49:08.991895   48438 command_runner.go:130] >         "message": "",
	I1212 19:49:08.991899   48438 command_runner.go:130] >         "reason": "",
	I1212 19:49:08.991904   48438 command_runner.go:130] >         "status": true,
	I1212 19:49:08.991918   48438 command_runner.go:130] >         "type": "RuntimeReady"
	I1212 19:49:08.991921   48438 command_runner.go:130] >       },
	I1212 19:49:08.991925   48438 command_runner.go:130] >       {
	I1212 19:49:08.991939   48438 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1212 19:49:08.991955   48438 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1212 19:49:08.991963   48438 command_runner.go:130] >         "status": false,
	I1212 19:49:08.991967   48438 command_runner.go:130] >         "type": "NetworkReady"
	I1212 19:49:08.991970   48438 command_runner.go:130] >       },
	I1212 19:49:08.991989   48438 command_runner.go:130] >       {
	I1212 19:49:08.992014   48438 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1212 19:49:08.992028   48438 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1212 19:49:08.992037   48438 command_runner.go:130] >         "status": false,
	I1212 19:49:08.992042   48438 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1212 19:49:08.992045   48438 command_runner.go:130] >       }
	I1212 19:49:08.992058   48438 command_runner.go:130] >     ]
	I1212 19:49:08.992068   48438 command_runner.go:130] >   }
	I1212 19:49:08.992071   48438 command_runner.go:130] > }
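
The `crictl info` dump above is where the runtime readiness conditions live; at this point NetworkReady is false because nothing has written a CNI config to /etc/cni/net.d yet (minikube selects kindnet just below). A sketch to pull out just the conditions, again assuming jq:

	sudo crictl info | jq '.status.conditions[] | {type, status, reason}'
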
	I1212 19:49:08.994409   48438 cni.go:84] Creating CNI manager for ""
	I1212 19:49:08.994432   48438 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 19:49:08.994453   48438 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 19:49:08.994474   48438 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-384006 NodeName:functional-384006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 19:49:08.994579   48438 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-384006"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 19:49:08.994644   48438 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 19:49:09.001254   48438 command_runner.go:130] > kubeadm
	I1212 19:49:09.001273   48438 command_runner.go:130] > kubectl
	I1212 19:49:09.001277   48438 command_runner.go:130] > kubelet
	I1212 19:49:09.002097   48438 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 19:49:09.002172   48438 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 19:49:09.009620   48438 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 19:49:09.025282   48438 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 19:49:09.038423   48438 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
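
The kubeadm config rendered above was just copied to /var/tmp/minikube/kubeadm.yaml.new (2237 bytes). The test never validates it standalone, but a hedged way to sanity-check such a file against the pinned kubeadm binary would be a dry run:

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
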
	I1212 19:49:09.054506   48438 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 19:49:09.058001   48438 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1212 19:49:09.058066   48438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:49:09.175064   48438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:49:09.445347   48438 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006 for IP: 192.168.49.2
	I1212 19:49:09.445426   48438 certs.go:195] generating shared ca certs ...
	I1212 19:49:09.445484   48438 certs.go:227] acquiring lock for ca certs: {Name:mk39256c1929fe0803d745b94bd58afc348a7e3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:49:09.445704   48438 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key
	I1212 19:49:09.445799   48438 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key
	I1212 19:49:09.445839   48438 certs.go:257] generating profile certs ...
	I1212 19:49:09.446025   48438 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.key
	I1212 19:49:09.446164   48438 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key.6e756d1b
	I1212 19:49:09.446275   48438 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key
	I1212 19:49:09.446313   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 19:49:09.446386   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 19:49:09.446438   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 19:49:09.446492   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 19:49:09.446544   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 19:49:09.446605   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 19:49:09.446663   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 19:49:09.446721   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 19:49:09.446856   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem (1338 bytes)
	W1212 19:49:09.446943   48438 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1212 19:49:09.447016   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 19:49:09.447074   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem (1078 bytes)
	I1212 19:49:09.447157   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem (1123 bytes)
	I1212 19:49:09.447233   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem (1679 bytes)
	I1212 19:49:09.447516   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1212 19:49:09.447598   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem -> /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.447652   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem -> /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.447686   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.448483   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 19:49:09.470612   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 19:49:09.491665   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 19:49:09.514138   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 19:49:09.535795   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 19:49:09.552964   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 19:49:09.570164   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 19:49:09.587343   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 19:49:09.604384   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1212 19:49:09.621471   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1212 19:49:09.638910   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 19:49:09.656615   48438 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 19:49:09.669235   48438 ssh_runner.go:195] Run: openssl version
	I1212 19:49:09.674787   48438 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1212 19:49:09.675343   48438 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.682988   48438 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1212 19:49:09.690425   48438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.693996   48438 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 19:40 /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.694309   48438 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:40 /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.694370   48438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.734801   48438 command_runner.go:130] > 3ec20f2e
	I1212 19:49:09.735274   48438 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 19:49:09.742485   48438 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.749966   48438 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 19:49:09.757755   48438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.761677   48438 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 19:30 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.761712   48438 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:30 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.761771   48438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.803349   48438 command_runner.go:130] > b5213941
	I1212 19:49:09.803809   48438 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 19:49:09.811062   48438 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.818242   48438 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1212 19:49:09.825568   48438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.829043   48438 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 19:40 /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.829382   48438 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:40 /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.829462   48438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.872087   48438 command_runner.go:130] > 51391683
	I1212 19:49:09.872525   48438 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
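
The hash/symlink pairs above implement OpenSSL's hashed-directory CA lookup: each trusted cert in /etc/ssl/certs is reachable through a symlink named <subject-hash>.0. A condensed sketch of the same scheme (the first two commands mirror the log; the final verify is an illustration, not something the test performs):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/"$h".0
	openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt
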
	I1212 19:49:09.879635   48438 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:49:09.883004   48438 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:49:09.883053   48438 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1212 19:49:09.883072   48438 command_runner.go:130] > Device: 259,1	Inode: 1317518     Links: 1
	I1212 19:49:09.883079   48438 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 19:49:09.883085   48438 command_runner.go:130] > Access: 2025-12-12 19:45:02.427863285 +0000
	I1212 19:49:09.883090   48438 command_runner.go:130] > Modify: 2025-12-12 19:40:58.462325249 +0000
	I1212 19:49:09.883095   48438 command_runner.go:130] > Change: 2025-12-12 19:40:58.462325249 +0000
	I1212 19:49:09.883100   48438 command_runner.go:130] >  Birth: 2025-12-12 19:40:58.462325249 +0000
	I1212 19:49:09.883177   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 19:49:09.925331   48438 command_runner.go:130] > Certificate will not expire
	I1212 19:49:09.925758   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 19:49:09.966336   48438 command_runner.go:130] > Certificate will not expire
	I1212 19:49:09.966825   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 19:49:10.007601   48438 command_runner.go:130] > Certificate will not expire
	I1212 19:49:10.008047   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 19:49:10.052009   48438 command_runner.go:130] > Certificate will not expire
	I1212 19:49:10.052500   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 19:49:10.094223   48438 command_runner.go:130] > Certificate will not expire
	I1212 19:49:10.094385   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 19:49:10.136742   48438 command_runner.go:130] > Certificate will not expire
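
`-checkend 86400` exits 0 (and prints "Certificate will not expire") only if the certificate remains valid for at least the next 86400 seconds, i.e. 24 hours; the exit code is what makes these checks scriptable. A minimal sketch with one of the certs checked above:

	if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt; then
		echo "cert valid for at least another 24h"
	else
		echo "cert expires within 24h (or is already expired)"
	fi
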
	I1212 19:49:10.136814   48438 kubeadm.go:401] StartCluster: {Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:49:10.136904   48438 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 19:49:10.136973   48438 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:49:10.167070   48438 cri.go:89] found id: ""
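	Before deciding how to restart the cluster, minikube enumerates paused kube-system containers through the CRI; the empty `found id: ""` above means crictl returned nothing to clean up. A sketch of the same query, assuming sudo and crictl are available on the node:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        // Same filter as the log: all containers labeled with the kube-system namespace.
	        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
	            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
	        if err != nil {
	            panic(err)
	        }
	        ids := strings.Fields(string(out)) // empty slice == `found id: ""`
	        fmt.Printf("found %d kube-system containers\n", len(ids))
	    }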
	I1212 19:49:10.167141   48438 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 19:49:10.174626   48438 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 19:49:10.174649   48438 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 19:49:10.174663   48438 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 19:49:10.175405   48438 kubeadm.go:417] found existing configuration files, will attempt cluster restart
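	The `sudo ls` probe above is the restart-versus-init decision: if the kubeadm artifacts from a previous run (kubeadm-flags.env, config.yaml, and the etcd data directory) all still exist, minikube attempts a control-plane restart instead of a fresh kubeadm init. A local-filesystem sketch of that decision (the real code runs the check over SSH via ssh_runner):

	    package main

	    import (
	        "fmt"
	        "os"
	    )

	    func main() {
	        paths := []string{
	            "/var/lib/kubelet/kubeadm-flags.env",
	            "/var/lib/kubelet/config.yaml",
	            "/var/lib/minikube/etcd",
	        }
	        existing := 0
	        for _, p := range paths {
	            if _, err := os.Stat(p); err == nil {
	                existing++
	            }
	        }
	        if existing == len(paths) {
	            fmt.Println("found existing configuration files, will attempt cluster restart")
	        } else {
	            fmt.Println("no prior state, run kubeadm init")
	        }
	    }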
	I1212 19:49:10.175423   48438 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 19:49:10.175476   48438 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 19:49:10.183010   48438 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:49:10.183461   48438 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-384006" does not appear in /home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:49:10.183602   48438 kubeconfig.go:62] /home/jenkins/minikube-integration/22112-2315/kubeconfig needs updating (will repair): [kubeconfig missing "functional-384006" cluster setting kubeconfig missing "functional-384006" context setting]
	I1212 19:49:10.183992   48438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/kubeconfig: {Name:mke1d79e374217e0c5bc78bc2d9631db0e1e9bda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
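	The kubeconfig repair above adds the missing cluster and context entries for the profile and rewrites the file under a file lock. A hedged sketch of the repair using client-go's clientcmd (minikube's kubeconfig.go adds the locking and endpoint verification seen in the log, elided here):

	    package main

	    import (
	        "k8s.io/client-go/tools/clientcmd"
	        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
	    )

	    // repairKubeconfig inserts cluster/user/context entries for a profile,
	    // mirroring the "needs updating (will repair)" step in the log.
	    func repairKubeconfig(path, name, server, clientCrt, clientKey, ca string) error {
	        cfg, err := clientcmd.LoadFromFile(path)
	        if err != nil {
	            return err
	        }
	        cfg.Clusters[name] = &clientcmdapi.Cluster{Server: server, CertificateAuthority: ca}
	        cfg.AuthInfos[name] = &clientcmdapi.AuthInfo{ClientCertificate: clientCrt, ClientKey: clientKey}
	        cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
	        return clientcmd.WriteToFile(*cfg, path)
	    }

	    func main() {
	        _ = repairKubeconfig(
	            "/home/jenkins/minikube-integration/22112-2315/kubeconfig",
	            "functional-384006",
	            "https://192.168.49.2:8441",
	            "/home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt",
	            "/home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.key",
	            "/home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt",
	        )
	    }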
	I1212 19:49:10.184411   48438 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:49:10.184572   48438 kapi.go:59] client config for functional-384006: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.key", CAFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
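	The rest.Config dump above is the client that the later node-readiness poll uses: host https://192.168.49.2:8441 plus the profile's client certificate pair and the minikube CA. A minimal hand-built equivalent (minikube derives it from the kubeconfig via kapi instead):

	    package main

	    import (
	        "context"
	        "fmt"

	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/rest"
	    )

	    func main() {
	        cfg := &rest.Config{
	            Host: "https://192.168.49.2:8441",
	            TLSClientConfig: rest.TLSClientConfig{
	                CertFile: "/home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt",
	                KeyFile:  "/home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.key",
	                CAFile:   "/home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt",
	            },
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        node, err := cs.CoreV1().Nodes().Get(context.Background(), "functional-384006", metav1.GetOptions{})
	        if err != nil {
	            fmt.Println("get node:", err) // e.g. connection refused while the apiserver restarts
	            return
	        }
	        fmt.Println("node:", node.Name)
	    }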
	I1212 19:49:10.185056   48438 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 19:49:10.185097   48438 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 19:49:10.185107   48438 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 19:49:10.185113   48438 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 19:49:10.185120   48438 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 19:49:10.185448   48438 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 19:49:10.185546   48438 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 19:49:10.194572   48438 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1212 19:49:10.194610   48438 kubeadm.go:602] duration metric: took 19.175488ms to restartPrimaryControlPlane
	I1212 19:49:10.194619   48438 kubeadm.go:403] duration metric: took 57.811789ms to StartCluster
	I1212 19:49:10.194633   48438 settings.go:142] acquiring lock: {Name:mk405cd0853bb1c41336dcaeeb8fe9a56ff7ca00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:49:10.194694   48438 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:49:10.195302   48438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/kubeconfig: {Name:mke1d79e374217e0c5bc78bc2d9631db0e1e9bda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:49:10.195505   48438 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 19:49:10.195860   48438 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 19:49:10.195913   48438 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
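	Of the toEnable map above, only default-storageclass and storage-provisioner are true, which is why exactly two addon manifests are applied below. The selection itself is just a map filter; a trivial sketch:

	    package main

	    import "fmt"

	    func main() {
	        toEnable := map[string]bool{
	            "default-storageclass": true,
	            "storage-provisioner":  true,
	            "dashboard":            false, // and ~35 more, all false in this run
	        }
	        for name, on := range toEnable {
	            if on {
	                fmt.Println("enabling addon:", name)
	            }
	        }
	    }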
	I1212 19:49:10.195982   48438 addons.go:70] Setting storage-provisioner=true in profile "functional-384006"
	I1212 19:49:10.195999   48438 addons.go:239] Setting addon storage-provisioner=true in "functional-384006"
	I1212 19:49:10.196020   48438 host.go:66] Checking if "functional-384006" exists ...
	I1212 19:49:10.196498   48438 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
	I1212 19:49:10.197078   48438 addons.go:70] Setting default-storageclass=true in profile "functional-384006"
	I1212 19:49:10.197104   48438 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-384006"
	I1212 19:49:10.197385   48438 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
	I1212 19:49:10.200737   48438 out.go:179] * Verifying Kubernetes components...
	I1212 19:49:10.203657   48438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:49:10.242694   48438 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:49:10.242850   48438 kapi.go:59] client config for functional-384006: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.key", CAFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 19:49:10.243167   48438 addons.go:239] Setting addon default-storageclass=true in "functional-384006"
	I1212 19:49:10.243197   48438 host.go:66] Checking if "functional-384006" exists ...
	I1212 19:49:10.243613   48438 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
	I1212 19:49:10.244264   48438 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 19:49:10.248400   48438 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:10.248422   48438 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 19:49:10.248484   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:10.280006   48438 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:10.280027   48438 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 19:49:10.280091   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:10.292135   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:10.320079   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
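	The two `docker container inspect -f ... "22/tcp"` calls above resolve the host port (32788 here) that Docker forwards to the container's sshd, and sshutil then dials it as user "docker" with the profile's id_rsa. A hedged sketch of that dial using golang.org/x/crypto/ssh (minikube's sshutil wraps the same primitives):

	    package main

	    import (
	        "fmt"
	        "os"

	        "golang.org/x/crypto/ssh"
	    )

	    func dial(keyPath, addr string) (*ssh.Client, error) {
	        key, err := os.ReadFile(keyPath)
	        if err != nil {
	            return nil, err
	        }
	        signer, err := ssh.ParsePrivateKey(key)
	        if err != nil {
	            return nil, err
	        }
	        cfg := &ssh.ClientConfig{
	            User:            "docker",
	            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only, as in CI
	        }
	        return ssh.Dial("tcp", addr, cfg)
	    }

	    func main() {
	        client, err := dial("/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa", "127.0.0.1:32788")
	        fmt.Println(client != nil, err)
	    }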
	I1212 19:49:10.410663   48438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:49:10.453525   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:10.485844   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:11.196335   48438 node_ready.go:35] waiting up to 6m0s for node "functional-384006" to be "Ready" ...
	I1212 19:49:11.196458   48438 type.go:168] "Request Body" body=""
	I1212 19:49:11.196510   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:11.196726   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:11.196748   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.196769   48438 retry.go:31] will retry after 366.342967ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.196806   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:11.196817   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.196823   48438 retry.go:31] will retry after 300.335318ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
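	Both applies fail because kubectl's client-side validation tries to download the OpenAPI schema from the still-restarting apiserver, so each attempt exits 1 and retry.go reschedules it with an irregular, growing delay (366ms, 300ms, 444ms, ... up to several seconds below). The pattern is plain jittered backoff; a sketch, not minikube's retry.go verbatim:

	    package main

	    import (
	        "errors"
	        "fmt"
	        "math/rand"
	        "time"
	    )

	    // retryWithBackoff re-runs f with a jittered, roughly doubling delay,
	    // mirroring the "will retry after ..." lines in this log.
	    func retryWithBackoff(attempts int, base time.Duration, f func() error) error {
	        var err error
	        for i := 0; i < attempts; i++ {
	            if err = f(); err == nil {
	                return nil
	            }
	            d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
	            fmt.Printf("will retry after %v: %v\n", d, err)
	            time.Sleep(d)
	        }
	        return err
	    }

	    func main() {
	        _ = retryWithBackoff(4, 300*time.Millisecond, func() error {
	            return errors.New("connection refused") // stand-in for the failing kubectl apply
	        })
	    }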
	I1212 19:49:11.196876   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:11.497399   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:11.554914   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:11.558623   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.558688   48438 retry.go:31] will retry after 444.117502ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.563799   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:11.619827   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:11.623191   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.623218   48438 retry.go:31] will retry after 549.294372ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.698171   48438 type.go:168] "Request Body" body=""
	I1212 19:49:11.698248   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:11.698564   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:12.003014   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:12.062616   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:12.066362   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.066391   48438 retry.go:31] will retry after 595.188251ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.173715   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:12.197048   48438 type.go:168] "Request Body" body=""
	I1212 19:49:12.197131   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:12.197395   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:12.233993   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:12.234039   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.234058   48438 retry.go:31] will retry after 392.030002ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.626804   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:12.662348   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:12.696816   48438 type.go:168] "Request Body" body=""
	I1212 19:49:12.696944   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:12.697262   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:12.708549   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:12.715333   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.715413   48438 retry.go:31] will retry after 1.207907286s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.756481   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:12.756580   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.756630   48438 retry.go:31] will retry after 988.700176ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:13.197091   48438 type.go:168] "Request Body" body=""
	I1212 19:49:13.197179   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:13.197507   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:13.197567   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
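	The repeated GET /api/v1/nodes/functional-384006 requests fire every 500ms; while the apiserver socket refuses connections the poll treats the error as retriable and keeps going until the node reports Ready or the 6m0s budget from start.go runs out. A sketch of that loop with client-go (the real node_ready.go differs in detail; the clientset is built as in the earlier rest.Config sketch):

	    package main

	    import (
	        "context"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	    )

	    // waitNodeReady polls every 500ms, up to 6 minutes, for the Ready condition.
	    func waitNodeReady(cs kubernetes.Interface, name string) error {
	        return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
	            func(ctx context.Context) (bool, error) {
	                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	                if err != nil {
	                    return false, nil // connection refused etc.: keep polling
	                }
	                for _, c := range node.Status.Conditions {
	                    if c.Type == corev1.NodeReady {
	                        return c.Status == corev1.ConditionTrue, nil
	                    }
	                }
	                return false, nil
	            })
	    }

	    func main() {
	        // Wiring a live clientset is shown in the earlier sketch; omitted here.
	    }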
	I1212 19:49:13.697358   48438 type.go:168] "Request Body" body=""
	I1212 19:49:13.697464   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:13.697803   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:13.746091   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:13.800035   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:13.803463   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:13.803491   48438 retry.go:31] will retry after 829.308427ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:13.923746   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:13.982211   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:13.982249   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:13.982267   48438 retry.go:31] will retry after 769.179652ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:14.196516   48438 type.go:168] "Request Body" body=""
	I1212 19:49:14.196587   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:14.196865   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:14.633627   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:14.690489   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:14.693763   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:14.693798   48438 retry.go:31] will retry after 2.844765229s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:14.697018   48438 type.go:168] "Request Body" body=""
	I1212 19:49:14.697087   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:14.697405   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:14.752598   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:14.810008   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:14.810058   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:14.810075   48438 retry.go:31] will retry after 1.702576008s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:15.196507   48438 type.go:168] "Request Body" body=""
	I1212 19:49:15.196581   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:15.196896   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:15.696568   48438 type.go:168] "Request Body" body=""
	I1212 19:49:15.696635   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:15.696970   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:15.697028   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:16.196951   48438 type.go:168] "Request Body" body=""
	I1212 19:49:16.197024   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:16.197313   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:16.513895   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:16.577782   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:16.577823   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:16.577842   48438 retry.go:31] will retry after 3.833463827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:16.697243   48438 type.go:168] "Request Body" body=""
	I1212 19:49:16.697311   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:16.697616   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:17.197033   48438 type.go:168] "Request Body" body=""
	I1212 19:49:17.197116   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:17.197383   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:17.538823   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:17.596746   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:17.600222   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:17.600249   48438 retry.go:31] will retry after 2.11378985s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:17.696505   48438 type.go:168] "Request Body" body=""
	I1212 19:49:17.696573   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:17.696885   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:18.196556   48438 type.go:168] "Request Body" body=""
	I1212 19:49:18.196667   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:18.196977   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:18.197023   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:18.696638   48438 type.go:168] "Request Body" body=""
	I1212 19:49:18.696729   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:18.696984   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:19.196736   48438 type.go:168] "Request Body" body=""
	I1212 19:49:19.196812   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:19.197214   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:19.696622   48438 type.go:168] "Request Body" body=""
	I1212 19:49:19.696700   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:19.696961   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:19.714208   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:19.768038   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:19.771528   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:19.771557   48438 retry.go:31] will retry after 5.800996246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:20.197387   48438 type.go:168] "Request Body" body=""
	I1212 19:49:20.197458   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:20.197743   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:20.197788   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:20.412247   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:20.466933   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:20.470625   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:20.470653   48438 retry.go:31] will retry after 5.197371043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:20.697029   48438 type.go:168] "Request Body" body=""
	I1212 19:49:20.697099   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:20.697410   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:21.197198   48438 type.go:168] "Request Body" body=""
	I1212 19:49:21.197271   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:21.197569   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:21.697046   48438 type.go:168] "Request Body" body=""
	I1212 19:49:21.697116   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:21.697371   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:22.197188   48438 type.go:168] "Request Body" body=""
	I1212 19:49:22.197269   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:22.197585   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:22.697243   48438 type.go:168] "Request Body" body=""
	I1212 19:49:22.697314   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:22.697647   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:22.697696   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:23.197042   48438 type.go:168] "Request Body" body=""
	I1212 19:49:23.197134   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:23.197408   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:23.697049   48438 type.go:168] "Request Body" body=""
	I1212 19:49:23.697121   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:23.697429   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:24.197196   48438 type.go:168] "Request Body" body=""
	I1212 19:49:24.197268   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:24.197600   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:24.697001   48438 type.go:168] "Request Body" body=""
	I1212 19:49:24.697067   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:24.697318   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:25.196599   48438 type.go:168] "Request Body" body=""
	I1212 19:49:25.196674   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:25.197011   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:25.197067   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:25.573546   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:25.640105   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:25.640150   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:25.640168   48438 retry.go:31] will retry after 9.327300318s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:25.668309   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:25.696826   48438 type.go:168] "Request Body" body=""
	I1212 19:49:25.696923   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:25.697181   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:25.735314   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:25.738857   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:25.738887   48438 retry.go:31] will retry after 6.705148998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
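
Both manifests keep failing for the same reason: the validation step needs /openapi/v2 from an apiserver that is not accepting connections yet, and --validate=false would only mask that. One way to avoid burning retries is to wait for the apiserver port before applying; the sketch below is a hypothetical pre-flight check under that assumption, not something minikube does here:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForAPIServer polls the apiserver's TCP port until it accepts
    // connections, so a later kubectl apply can fetch the schema.
    func waitForAPIServer(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver %s not reachable after %s", addr, timeout)
    }

    func main() {
    	if err := waitForAPIServer("localhost:8441", 30*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
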
	I1212 19:49:26.197164   48438 type.go:168] "Request Body" body=""
	I1212 19:49:26.197240   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:26.197490   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:26.697309   48438 type.go:168] "Request Body" body=""
	I1212 19:49:26.697408   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:26.697729   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:27.197507   48438 type.go:168] "Request Body" body=""
	I1212 19:49:27.197584   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:27.197871   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:27.197919   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
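
The node_ready warning marks one failed pass of the loop that polls the node's Ready condition roughly every 500ms. In client-go terms the check looks approximately like this (a sketch: nodeReady is an illustrative helper, not minikube's node_ready.go):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err // e.g. connection refused while the apiserver is down
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(nodeReady(cs, "functional-384006"))
    }
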
	I1212 19:49:27.696575   48438 type.go:168] "Request Body" body=""
	I1212 19:49:27.696652   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:27.696952   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:28.196680   48438 type.go:168] "Request Body" body=""
	I1212 19:49:28.196762   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:28.197103   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:28.696600   48438 type.go:168] "Request Body" body=""
	I1212 19:49:28.696675   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:28.696996   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:29.196525   48438 type.go:168] "Request Body" body=""
	I1212 19:49:29.196638   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:29.196926   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:29.696599   48438 type.go:168] "Request Body" body=""
	I1212 19:49:29.696677   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:29.697003   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:29.697067   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:30.197085   48438 type.go:168] "Request Body" body=""
	I1212 19:49:30.197181   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:30.197519   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:30.697033   48438 type.go:168] "Request Body" body=""
	I1212 19:49:30.697106   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:30.697351   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:31.197223   48438 type.go:168] "Request Body" body=""
	I1212 19:49:31.197295   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:31.197605   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:31.697429   48438 type.go:168] "Request Body" body=""
	I1212 19:49:31.697504   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:31.697832   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:31.697883   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:32.196518   48438 type.go:168] "Request Body" body=""
	I1212 19:49:32.196586   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:32.196831   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:32.444273   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:32.498733   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:32.502453   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:32.502484   48438 retry.go:31] will retry after 9.024395099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:32.696884   48438 type.go:168] "Request Body" body=""
	I1212 19:49:32.696967   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:32.697298   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:33.196612   48438 type.go:168] "Request Body" body=""
	I1212 19:49:33.196705   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:33.196986   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:33.696528   48438 type.go:168] "Request Body" body=""
	I1212 19:49:33.696606   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:33.696862   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:34.196549   48438 type.go:168] "Request Body" body=""
	I1212 19:49:34.196618   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:34.196944   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:34.196991   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:34.696558   48438 type.go:168] "Request Body" body=""
	I1212 19:49:34.696625   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:34.696943   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:34.968441   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:35.030670   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:35.034703   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:35.034735   48438 retry.go:31] will retry after 11.456350697s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
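
By this point the scheduled delays (9.33s, 6.71s, 9.02s, 11.46s, and growing) show each addon's apply loop backing off with randomized, roughly increasing waits. A minimal sketch of that pattern, assuming exponential growth with jitter (minikube's retry.go may differ in detail):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithJitter retries fn with exponentially growing, jittered delays.
    func retryWithJitter(fn func() error, attempts int, base time.Duration) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		d := base * (1 << uint(i))                // grow each attempt
    		d += time.Duration(rand.Int63n(int64(d))) // add random jitter
    		fmt.Printf("will retry after %s: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	calls := 0
    	_ = retryWithJitter(func() error {
    		calls++
    		if calls < 3 {
    			return fmt.Errorf("dial tcp [::1]:8441: connect: connection refused")
    		}
    		return nil
    	}, 5, 2*time.Second)
    }
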
	I1212 19:49:35.196975   48438 type.go:168] "Request Body" body=""
	I1212 19:49:35.197050   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:35.197325   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:35.697091   48438 type.go:168] "Request Body" body=""
	I1212 19:49:35.697164   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:35.697483   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:36.197206   48438 type.go:168] "Request Body" body=""
	I1212 19:49:36.197280   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:36.197576   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:36.197625   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:36.697028   48438 type.go:168] "Request Body" body=""
	I1212 19:49:36.697108   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:36.697363   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:37.197157   48438 type.go:168] "Request Body" body=""
	I1212 19:49:37.197231   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:37.197556   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:37.697344   48438 type.go:168] "Request Body" body=""
	I1212 19:49:37.697421   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:37.697737   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:38.197048   48438 type.go:168] "Request Body" body=""
	I1212 19:49:38.197120   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:38.197393   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:38.697237   48438 type.go:168] "Request Body" body=""
	I1212 19:49:38.697313   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:38.697687   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:38.697751   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:39.197495   48438 type.go:168] "Request Body" body=""
	I1212 19:49:39.197574   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:39.197923   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:39.696600   48438 type.go:168] "Request Body" body=""
	I1212 19:49:39.696663   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:39.696902   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:40.196826   48438 type.go:168] "Request Body" body=""
	I1212 19:49:40.196908   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:40.197247   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:40.696978   48438 type.go:168] "Request Body" body=""
	I1212 19:49:40.697049   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:40.697369   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:41.197258   48438 type.go:168] "Request Body" body=""
	I1212 19:49:41.197327   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:41.197601   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:41.197683   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:41.527120   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:41.586633   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:41.590403   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:41.590436   48438 retry.go:31] will retry after 11.748431511s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:41.696875   48438 type.go:168] "Request Body" body=""
	I1212 19:49:41.696951   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:41.697272   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:42.196642   48438 type.go:168] "Request Body" body=""
	I1212 19:49:42.196731   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:42.197083   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:42.696550   48438 type.go:168] "Request Body" body=""
	I1212 19:49:42.696647   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:42.696923   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:43.196548   48438 type.go:168] "Request Body" body=""
	I1212 19:49:43.196618   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:43.196955   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:43.696648   48438 type.go:168] "Request Body" body=""
	I1212 19:49:43.696721   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:43.697043   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:43.697102   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:44.196771   48438 type.go:168] "Request Body" body=""
	I1212 19:49:44.196840   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:44.197104   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:44.696558   48438 type.go:168] "Request Body" body=""
	I1212 19:49:44.696662   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:44.696979   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:45.196928   48438 type.go:168] "Request Body" body=""
	I1212 19:49:45.197005   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:45.197335   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:45.696564   48438 type.go:168] "Request Body" body=""
	I1212 19:49:45.696632   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:45.696941   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:46.196940   48438 type.go:168] "Request Body" body=""
	I1212 19:49:46.197010   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:46.197309   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:46.197362   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:46.491755   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:46.549211   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:46.549254   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:46.549272   48438 retry.go:31] will retry after 7.577859466s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:46.697552   48438 type.go:168] "Request Body" body=""
	I1212 19:49:46.697629   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:46.697924   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:47.196531   48438 type.go:168] "Request Body" body=""
	I1212 19:49:47.196597   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:47.196927   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:47.696631   48438 type.go:168] "Request Body" body=""
	I1212 19:49:47.696710   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:47.696981   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:48.196610   48438 type.go:168] "Request Body" body=""
	I1212 19:49:48.196684   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:48.197015   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:48.696655   48438 type.go:168] "Request Body" body=""
	I1212 19:49:48.696726   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:48.697050   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:48.697099   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:49.196624   48438 type.go:168] "Request Body" body=""
	I1212 19:49:49.196709   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:49.197019   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:49.696618   48438 type.go:168] "Request Body" body=""
	I1212 19:49:49.696695   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:49.697125   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:50.197269   48438 type.go:168] "Request Body" body=""
	I1212 19:49:50.197350   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:50.197608   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:50.697495   48438 type.go:168] "Request Body" body=""
	I1212 19:49:50.697567   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:50.697901   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:50.697955   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:51.196732   48438 type.go:168] "Request Body" body=""
	I1212 19:49:51.196803   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:51.197112   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:51.696762   48438 type.go:168] "Request Body" body=""
	I1212 19:49:51.696829   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:51.697174   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:52.196599   48438 type.go:168] "Request Body" body=""
	I1212 19:49:52.196673   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:52.196971   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:52.696606   48438 type.go:168] "Request Body" body=""
	I1212 19:49:52.696678   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:52.697012   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:53.196528   48438 type.go:168] "Request Body" body=""
	I1212 19:49:53.196606   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:53.196891   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:53.196934   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:53.339331   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:53.394698   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:53.398291   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:53.398322   48438 retry.go:31] will retry after 25.381584091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:53.696596   48438 type.go:168] "Request Body" body=""
	I1212 19:49:53.696686   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:53.696994   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:54.127648   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:54.185700   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:54.185751   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:54.185771   48438 retry.go:31] will retry after 18.076319981s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:54.196871   48438 type.go:168] "Request Body" body=""
	I1212 19:49:54.196963   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:54.197226   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:54.696517   48438 type.go:168] "Request Body" body=""
	I1212 19:49:54.696579   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:54.696863   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:55.196622   48438 type.go:168] "Request Body" body=""
	I1212 19:49:55.196694   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:55.196982   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:55.197044   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:55.696592   48438 type.go:168] "Request Body" body=""
	I1212 19:49:55.696691   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:55.696999   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:56.196994   48438 type.go:168] "Request Body" body=""
	I1212 19:49:56.197059   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:56.197324   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:56.697159   48438 type.go:168] "Request Body" body=""
	I1212 19:49:56.697233   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:56.697537   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:57.197290   48438 type.go:168] "Request Body" body=""
	I1212 19:49:57.197368   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:57.197681   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:57.197733   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:57.697000   48438 type.go:168] "Request Body" body=""
	I1212 19:49:57.697069   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:57.697304   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:58.196582   48438 type.go:168] "Request Body" body=""
	I1212 19:49:58.196651   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:58.196993   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:58.696564   48438 type.go:168] "Request Body" body=""
	I1212 19:49:58.696640   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:58.696958   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:59.196629   48438 type.go:168] "Request Body" body=""
	I1212 19:49:59.196697   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:59.197071   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:59.696921   48438 type.go:168] "Request Body" body=""
	I1212 19:49:59.696993   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:59.697326   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:59.697380   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:00.197384   48438 type.go:168] "Request Body" body=""
	I1212 19:50:00.197468   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:00.197775   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:00.696649   48438 type.go:168] "Request Body" body=""
	I1212 19:50:00.696725   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:00.696989   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:01.197059   48438 type.go:168] "Request Body" body=""
	I1212 19:50:01.197145   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:01.197509   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:01.697372   48438 type.go:168] "Request Body" body=""
	I1212 19:50:01.697463   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:01.697881   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:01.697942   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:02.196547   48438 type.go:168] "Request Body" body=""
	I1212 19:50:02.196622   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:02.196936   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:02.696592   48438 type.go:168] "Request Body" body=""
	I1212 19:50:02.696670   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:02.696998   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:03.196708   48438 type.go:168] "Request Body" body=""
	I1212 19:50:03.196781   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:03.197108   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:03.696791   48438 type.go:168] "Request Body" body=""
	I1212 19:50:03.696860   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:03.697174   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:04.196836   48438 type.go:168] "Request Body" body=""
	I1212 19:50:04.196908   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:04.197244   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:04.197301   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:04.696812   48438 type.go:168] "Request Body" body=""
	I1212 19:50:04.696891   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:04.697179   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:05.196827   48438 type.go:168] "Request Body" body=""
	I1212 19:50:05.196904   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:05.197227   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:05.696566   48438 type.go:168] "Request Body" body=""
	I1212 19:50:05.696635   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:05.696920   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:06.196948   48438 type.go:168] "Request Body" body=""
	I1212 19:50:06.197026   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:06.197368   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:06.197422   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:06.697024   48438 type.go:168] "Request Body" body=""
	I1212 19:50:06.697097   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:06.697393   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:07.197202   48438 type.go:168] "Request Body" body=""
	I1212 19:50:07.197278   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:07.197614   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:07.697404   48438 type.go:168] "Request Body" body=""
	I1212 19:50:07.697475   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:07.697790   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:08.196467   48438 type.go:168] "Request Body" body=""
	I1212 19:50:08.196533   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:08.196831   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:08.696513   48438 type.go:168] "Request Body" body=""
	I1212 19:50:08.696584   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:08.696925   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:08.696997   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:09.196531   48438 type.go:168] "Request Body" body=""
	I1212 19:50:09.196606   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:09.196936   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:09.696629   48438 type.go:168] "Request Body" body=""
	I1212 19:50:09.696697   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:09.696947   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:10.197069   48438 type.go:168] "Request Body" body=""
	I1212 19:50:10.197157   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:10.197524   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:10.697347   48438 type.go:168] "Request Body" body=""
	I1212 19:50:10.697420   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:10.697769   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:10.697839   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:11.197146   48438 type.go:168] "Request Body" body=""
	I1212 19:50:11.197258   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:11.197571   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:11.697392   48438 type.go:168] "Request Body" body=""
	I1212 19:50:11.697467   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:11.697811   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:12.197401   48438 type.go:168] "Request Body" body=""
	I1212 19:50:12.197473   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:12.197766   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:12.263038   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:50:12.317640   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:12.321089   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:50:12.321118   48438 retry.go:31] will retry after 33.331276854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
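The retry.go:31 line above shows minikube's addon applier backing off after a failed kubectl apply (here, 33.3s before the next attempt on storageclass.yaml). A minimal sketch of the same run-then-back-off loop, assuming only the standard library (applyWithRetry and the attempt count are hypothetical; minikube computes its jittered delays elsewhere):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    // applyWithRetry shells out to kubectl and, on failure, sleeps a jittered
    // ~30s delay before trying again, mirroring "will retry after Ns" above.
    func applyWithRetry(manifest string, attempts int) error {
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
    		if err == nil {
    			return nil
    		}
    		lastErr = fmt.Errorf("apply %s: %v\noutput:\n%s", manifest, err, out)
    		delay := time.Duration(20+rand.Intn(20)) * time.Second // jittered, 20-40s
    		fmt.Printf("apply failed, will retry after %s\n", delay)
    		time.Sleep(delay)
    	}
    	return lastErr
    }

    func main() {
    	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 3); err != nil {
    		fmt.Println("giving up:", err)
    	}
    }

Note that the failure itself happens in kubectl's validation phase: it cannot download the OpenAPI schema from localhost:8441. The suggested --validate=false would only skip the schema fetch; with nothing listening on 8441, the apply request itself would still be refused.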
	I1212 19:50:12.696541   48438 type.go:168] "Request Body" body=""
	I1212 19:50:12.696627   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:12.696894   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:13.196651   48438 type.go:168] "Request Body" body=""
	I1212 19:50:13.196725   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:13.197000   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:13.197046   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:13.696602   48438 type.go:168] "Request Body" body=""
	I1212 19:50:13.696674   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:13.696975   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:14.196564   48438 type.go:168] "Request Body" body=""
	I1212 19:50:14.196634   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:14.196947   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:14.696632   48438 type.go:168] "Request Body" body=""
	I1212 19:50:14.696719   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:14.697044   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:15.196623   48438 type.go:168] "Request Body" body=""
	I1212 19:50:15.196715   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:15.197032   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:15.197085   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:15.696713   48438 type.go:168] "Request Body" body=""
	I1212 19:50:15.696791   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:15.697104   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:16.197135   48438 type.go:168] "Request Body" body=""
	I1212 19:50:16.197236   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:16.197570   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:16.697411   48438 type.go:168] "Request Body" body=""
	I1212 19:50:16.697489   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:16.697833   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:17.196534   48438 type.go:168] "Request Body" body=""
	I1212 19:50:17.196602   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:17.196867   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:17.696613   48438 type.go:168] "Request Body" body=""
	I1212 19:50:17.696709   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:17.697053   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:17.697120   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:18.196648   48438 type.go:168] "Request Body" body=""
	I1212 19:50:18.196724   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:18.197072   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:18.696613   48438 type.go:168] "Request Body" body=""
	I1212 19:50:18.696679   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:18.696950   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:18.780412   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:50:18.840261   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:18.840307   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:50:18.840327   48438 retry.go:31] will retry after 31.549397312s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:50:19.196623   48438 type.go:168] "Request Body" body=""
	I1212 19:50:19.196694   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:19.196999   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:19.696626   48438 type.go:168] "Request Body" body=""
	I1212 19:50:19.696703   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:19.697021   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:20.197071   48438 type.go:168] "Request Body" body=""
	I1212 19:50:20.197171   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:20.197499   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:20.197554   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:20.697293   48438 type.go:168] "Request Body" body=""
	I1212 19:50:20.697395   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:20.697711   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:21.197239   48438 type.go:168] "Request Body" body=""
	I1212 19:50:21.197313   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:21.197699   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:21.697033   48438 type.go:168] "Request Body" body=""
	I1212 19:50:21.697105   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:21.697463   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:22.197573   48438 type.go:168] "Request Body" body=""
	I1212 19:50:22.197648   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:22.197961   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:22.198017   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:22.696673   48438 type.go:168] "Request Body" body=""
	I1212 19:50:22.696757   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:22.697109   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:23.196692   48438 type.go:168] "Request Body" body=""
	I1212 19:50:23.196763   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:23.197088   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:23.696607   48438 type.go:168] "Request Body" body=""
	I1212 19:50:23.696679   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:23.697041   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:24.196735   48438 type.go:168] "Request Body" body=""
	I1212 19:50:24.196826   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:24.197141   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:24.696553   48438 type.go:168] "Request Body" body=""
	I1212 19:50:24.696621   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:24.696913   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:24.696962   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:25.196593   48438 type.go:168] "Request Body" body=""
	I1212 19:50:25.196673   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:25.197028   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:25.696594   48438 type.go:168] "Request Body" body=""
	I1212 19:50:25.696673   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:25.696999   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:26.196805   48438 type.go:168] "Request Body" body=""
	I1212 19:50:26.196888   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:26.197147   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:26.696608   48438 type.go:168] "Request Body" body=""
	I1212 19:50:26.696679   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:26.697019   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:26.697078   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:27.196623   48438 type.go:168] "Request Body" body=""
	I1212 19:50:27.196705   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:27.197036   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:27.696717   48438 type.go:168] "Request Body" body=""
	I1212 19:50:27.696786   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:27.697091   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:28.196811   48438 type.go:168] "Request Body" body=""
	I1212 19:50:28.196880   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:28.197204   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:28.696604   48438 type.go:168] "Request Body" body=""
	I1212 19:50:28.696681   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:28.697032   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:28.697101   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:29.196569   48438 type.go:168] "Request Body" body=""
	I1212 19:50:29.196634   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:29.196899   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:29.696596   48438 type.go:168] "Request Body" body=""
	I1212 19:50:29.696673   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:29.697016   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:30.196809   48438 type.go:168] "Request Body" body=""
	I1212 19:50:30.196906   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:30.197224   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:30.696564   48438 type.go:168] "Request Body" body=""
	I1212 19:50:30.696665   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:30.696997   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:31.196990   48438 type.go:168] "Request Body" body=""
	I1212 19:50:31.197061   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:31.197407   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:31.197465   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:31.697274   48438 type.go:168] "Request Body" body=""
	I1212 19:50:31.697350   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:31.697677   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:32.197039   48438 type.go:168] "Request Body" body=""
	I1212 19:50:32.197133   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:32.197397   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:32.697173   48438 type.go:168] "Request Body" body=""
	I1212 19:50:32.697264   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:32.697607   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:33.197434   48438 type.go:168] "Request Body" body=""
	I1212 19:50:33.197509   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:33.197848   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:33.197901   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:33.696526   48438 type.go:168] "Request Body" body=""
	I1212 19:50:33.696597   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:33.696851   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:34.196561   48438 type.go:168] "Request Body" body=""
	I1212 19:50:34.196634   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:34.196929   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:34.696533   48438 type.go:168] "Request Body" body=""
	I1212 19:50:34.696627   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:34.696942   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:35.196543   48438 type.go:168] "Request Body" body=""
	I1212 19:50:35.196615   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:35.196925   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:35.696579   48438 type.go:168] "Request Body" body=""
	I1212 19:50:35.696679   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:35.696996   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:35.697050   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:36.197040   48438 type.go:168] "Request Body" body=""
	I1212 19:50:36.197129   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:36.197456   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:36.697255   48438 type.go:168] "Request Body" body=""
	I1212 19:50:36.697338   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:36.697651   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:37.197319   48438 type.go:168] "Request Body" body=""
	I1212 19:50:37.197399   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:37.197705   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:37.697534   48438 type.go:168] "Request Body" body=""
	I1212 19:50:37.697606   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:37.697891   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:37.697935   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:38.196632   48438 type.go:168] "Request Body" body=""
	I1212 19:50:38.196697   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:38.197041   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:38.696592   48438 type.go:168] "Request Body" body=""
	I1212 19:50:38.696683   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:38.696994   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:39.196632   48438 type.go:168] "Request Body" body=""
	I1212 19:50:39.196728   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:39.197038   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:39.696547   48438 type.go:168] "Request Body" body=""
	I1212 19:50:39.696633   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:39.696879   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:40.197486   48438 type.go:168] "Request Body" body=""
	I1212 19:50:40.197559   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:40.197900   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:40.197971   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:40.696503   48438 type.go:168] "Request Body" body=""
	I1212 19:50:40.696594   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:40.696917   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:41.196679   48438 type.go:168] "Request Body" body=""
	I1212 19:50:41.196745   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:41.196986   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:41.696662   48438 type.go:168] "Request Body" body=""
	I1212 19:50:41.696734   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:41.697088   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:42.196929   48438 type.go:168] "Request Body" body=""
	I1212 19:50:42.197017   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:42.197388   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:42.697020   48438 type.go:168] "Request Body" body=""
	I1212 19:50:42.697095   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:42.697350   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:42.697390   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:43.197172   48438 type.go:168] "Request Body" body=""
	I1212 19:50:43.197249   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:43.197578   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:43.697431   48438 type.go:168] "Request Body" body=""
	I1212 19:50:43.697521   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:43.697836   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:44.196518   48438 type.go:168] "Request Body" body=""
	I1212 19:50:44.196586   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:44.196857   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:44.696570   48438 type.go:168] "Request Body" body=""
	I1212 19:50:44.696646   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:44.697013   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:45.196875   48438 type.go:168] "Request Body" body=""
	I1212 19:50:45.196959   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:45.197384   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:45.197450   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:45.653170   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:50:45.696473   48438 type.go:168] "Request Body" body=""
	I1212 19:50:45.696544   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:45.696768   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:45.722043   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:45.722078   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:45.722170   48438 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 19:50:46.197149   48438 type.go:168] "Request Body" body=""
	I1212 19:50:46.197221   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:46.197524   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:46.697218   48438 type.go:168] "Request Body" body=""
	I1212 19:50:46.697285   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:46.697603   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:47.197024   48438 type.go:168] "Request Body" body=""
	I1212 19:50:47.197110   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:47.197403   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:47.697075   48438 type.go:168] "Request Body" body=""
	I1212 19:50:47.697158   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:47.697475   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:47.697529   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:48.197120   48438 type.go:168] "Request Body" body=""
	I1212 19:50:48.197195   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:48.197571   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:48.697105   48438 type.go:168] "Request Body" body=""
	I1212 19:50:48.697174   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:48.697455   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:49.197123   48438 type.go:168] "Request Body" body=""
	I1212 19:50:49.197191   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:49.197523   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:49.697198   48438 type.go:168] "Request Body" body=""
	I1212 19:50:49.697276   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:49.697615   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:49.697669   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:50.197372   48438 type.go:168] "Request Body" body=""
	I1212 19:50:50.197443   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:50.197708   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:50.390183   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:50:50.447451   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:50.447486   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:50.447560   48438 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 19:50:50.450729   48438 out.go:179] * Enabled addons: 
	I1212 19:50:50.452858   48438 addons.go:530] duration metric: took 1m40.25694205s for enable addons: enabled=[]
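By this point every readiness probe and every addon apply has failed the same way, so the addon phase gives up with an empty enabled list. A quick way to confirm the root cause from outside minikube is to hit the endpoint directly; a minimal sketch in plain net/http (TLS verification skipped, which is only acceptable against a throwaway test cluster):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // test cluster only
    		},
    	}
    	req, err := http.NewRequest("GET", "https://192.168.49.2:8441/api/v1/nodes/functional-384006", nil)
    	if err != nil {
    		panic(err)
    	}
    	// Same content negotiation as the round_trippers lines in the log.
    	req.Header.Set("Accept", "application/vnd.kubernetes.protobuf,application/json")
    	resp, err := client.Do(req)
    	if err != nil {
    		// Expect: dial tcp 192.168.49.2:8441: connect: connection refused
    		fmt.Println("probe failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("status:", resp.Status)
    }

A healthy apiserver would answer an unauthenticated request with 401 or 403; a raw "connect: connection refused" means nothing is listening on 8441 at all, which points at the apiserver process itself rather than at certificates or RBAC.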
	I1212 19:50:50.697432   48438 type.go:168] "Request Body" body=""
	I1212 19:50:50.697527   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:50.697885   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:51.196739   48438 type.go:168] "Request Body" body=""
	I1212 19:50:51.196816   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:51.197159   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:51.696528   48438 type.go:168] "Request Body" body=""
	I1212 19:50:51.696603   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:51.696897   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:52.196579   48438 type.go:168] "Request Body" body=""
	I1212 19:50:52.196648   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:52.196951   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:52.197004   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:52.696606   48438 type.go:168] "Request Body" body=""
	I1212 19:50:52.696677   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:52.697003   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:53.196675   48438 type.go:168] "Request Body" body=""
	I1212 19:50:53.196744   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:53.196992   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:53.696666   48438 type.go:168] "Request Body" body=""
	I1212 19:50:53.696741   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:53.697070   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:54.196757   48438 type.go:168] "Request Body" body=""
	I1212 19:50:54.196826   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:54.197113   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:54.197157   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:54.696549   48438 type.go:168] "Request Body" body=""
	I1212 19:50:54.696641   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:54.696957   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:55.196628   48438 type.go:168] "Request Body" body=""
	I1212 19:50:55.196708   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:55.197136   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:55.696829   48438 type.go:168] "Request Body" body=""
	I1212 19:50:55.696900   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:55.697229   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:56.197066   48438 type.go:168] "Request Body" body=""
	I1212 19:50:56.197131   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:56.197387   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:56.197429   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical GET https://192.168.49.2:8441/api/v1/nodes/functional-384006 request/response blocks repeated every ~500ms from 19:50:56.697 through 19:51:58.196; every attempt failed with "dial tcp 192.168.49.2:8441: connect: connection refused", and node_ready.go:55 logged the same "will retry" warning at each ~2-2.5s checkpoint (19:50:58, 19:51:00, 19:51:03, ... 19:51:54, 19:51:56) ...]
	I1212 19:51:58.696579   48438 type.go:168] "Request Body" body=""
	I1212 19:51:58.696651   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:58.696996   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:58.697049   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:59.196681   48438 type.go:168] "Request Body" body=""
	I1212 19:51:59.196753   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:59.197032   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:59.696579   48438 type.go:168] "Request Body" body=""
	I1212 19:51:59.696659   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:59.696968   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:00.197205   48438 type.go:168] "Request Body" body=""
	I1212 19:52:00.197290   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:00.197625   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:00.697067   48438 type.go:168] "Request Body" body=""
	I1212 19:52:00.697141   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:00.697476   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:00.697529   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:01.197420   48438 type.go:168] "Request Body" body=""
	I1212 19:52:01.197496   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:01.197846   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:01.696560   48438 type.go:168] "Request Body" body=""
	I1212 19:52:01.696637   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:01.696968   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:02.196588   48438 type.go:168] "Request Body" body=""
	I1212 19:52:02.196660   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:02.196972   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:02.696570   48438 type.go:168] "Request Body" body=""
	I1212 19:52:02.696648   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:02.696964   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:03.196609   48438 type.go:168] "Request Body" body=""
	I1212 19:52:03.196688   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:03.197049   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:03.197103   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:03.696754   48438 type.go:168] "Request Body" body=""
	I1212 19:52:03.696832   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:03.697081   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:04.196628   48438 type.go:168] "Request Body" body=""
	I1212 19:52:04.196706   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:04.197052   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:04.696745   48438 type.go:168] "Request Body" body=""
	I1212 19:52:04.696824   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:04.697154   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:05.196838   48438 type.go:168] "Request Body" body=""
	I1212 19:52:05.196927   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:05.197234   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:05.197290   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:05.696932   48438 type.go:168] "Request Body" body=""
	I1212 19:52:05.697009   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:05.697331   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:06.197237   48438 type.go:168] "Request Body" body=""
	I1212 19:52:06.197311   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:06.197634   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:06.697046   48438 type.go:168] "Request Body" body=""
	I1212 19:52:06.697120   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:06.697379   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:07.197151   48438 type.go:168] "Request Body" body=""
	I1212 19:52:07.197221   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:07.197514   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:07.197560   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:07.697323   48438 type.go:168] "Request Body" body=""
	I1212 19:52:07.697404   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:07.697708   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:08.197031   48438 type.go:168] "Request Body" body=""
	I1212 19:52:08.197097   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:08.197357   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:08.697148   48438 type.go:168] "Request Body" body=""
	I1212 19:52:08.697227   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:08.697556   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:09.197392   48438 type.go:168] "Request Body" body=""
	I1212 19:52:09.197468   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:09.197784   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:09.197845   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:09.696549   48438 type.go:168] "Request Body" body=""
	I1212 19:52:09.696616   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:09.696887   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:10.196962   48438 type.go:168] "Request Body" body=""
	I1212 19:52:10.197039   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:10.197334   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:10.696626   48438 type.go:168] "Request Body" body=""
	I1212 19:52:10.696717   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:10.697024   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:11.196847   48438 type.go:168] "Request Body" body=""
	I1212 19:52:11.196921   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:11.197227   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:11.696601   48438 type.go:168] "Request Body" body=""
	I1212 19:52:11.696679   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:11.696981   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:11.697032   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:12.196580   48438 type.go:168] "Request Body" body=""
	I1212 19:52:12.196650   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:12.196940   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:12.696545   48438 type.go:168] "Request Body" body=""
	I1212 19:52:12.696621   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:12.696869   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:13.196568   48438 type.go:168] "Request Body" body=""
	I1212 19:52:13.196664   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:13.196980   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:13.696589   48438 type.go:168] "Request Body" body=""
	I1212 19:52:13.696666   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:13.697006   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:13.697058   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:14.196560   48438 type.go:168] "Request Body" body=""
	I1212 19:52:14.196631   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:14.196946   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:14.696636   48438 type.go:168] "Request Body" body=""
	I1212 19:52:14.696714   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:14.697058   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:15.196659   48438 type.go:168] "Request Body" body=""
	I1212 19:52:15.196740   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:15.197071   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:15.696563   48438 type.go:168] "Request Body" body=""
	I1212 19:52:15.696653   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:15.696954   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:16.196956   48438 type.go:168] "Request Body" body=""
	I1212 19:52:16.197033   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:16.197379   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:16.197433   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:16.696942   48438 type.go:168] "Request Body" body=""
	I1212 19:52:16.697013   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:16.697325   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:17.197029   48438 type.go:168] "Request Body" body=""
	I1212 19:52:17.197104   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:17.197358   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:17.697015   48438 type.go:168] "Request Body" body=""
	I1212 19:52:17.697084   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:17.697367   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:18.196629   48438 type.go:168] "Request Body" body=""
	I1212 19:52:18.196717   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:18.197023   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:18.696554   48438 type.go:168] "Request Body" body=""
	I1212 19:52:18.696628   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:18.696875   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:18.696923   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:19.196580   48438 type.go:168] "Request Body" body=""
	I1212 19:52:19.196654   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:19.196987   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:19.696532   48438 type.go:168] "Request Body" body=""
	I1212 19:52:19.696605   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:19.696921   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:20.196969   48438 type.go:168] "Request Body" body=""
	I1212 19:52:20.197044   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:20.197330   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:20.696598   48438 type.go:168] "Request Body" body=""
	I1212 19:52:20.696690   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:20.696997   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:20.697054   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:21.197019   48438 type.go:168] "Request Body" body=""
	I1212 19:52:21.197109   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:21.197420   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:21.697065   48438 type.go:168] "Request Body" body=""
	I1212 19:52:21.697171   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:21.697471   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:22.197327   48438 type.go:168] "Request Body" body=""
	I1212 19:52:22.197400   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:22.197732   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:22.697523   48438 type.go:168] "Request Body" body=""
	I1212 19:52:22.697602   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:22.697908   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:22.697961   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:23.196582   48438 type.go:168] "Request Body" body=""
	I1212 19:52:23.196653   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:23.196911   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:23.696648   48438 type.go:168] "Request Body" body=""
	I1212 19:52:23.696728   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:23.697054   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:24.196615   48438 type.go:168] "Request Body" body=""
	I1212 19:52:24.196693   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:24.197072   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:24.696554   48438 type.go:168] "Request Body" body=""
	I1212 19:52:24.696620   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:24.696867   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:25.196559   48438 type.go:168] "Request Body" body=""
	I1212 19:52:25.196634   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:25.196989   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:25.197049   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:25.696745   48438 type.go:168] "Request Body" body=""
	I1212 19:52:25.696823   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:25.697176   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:26.197032   48438 type.go:168] "Request Body" body=""
	I1212 19:52:26.197104   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:26.197365   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:26.697133   48438 type.go:168] "Request Body" body=""
	I1212 19:52:26.697207   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:26.697533   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:27.197240   48438 type.go:168] "Request Body" body=""
	I1212 19:52:27.197313   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:27.197651   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:27.197708   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:27.696997   48438 type.go:168] "Request Body" body=""
	I1212 19:52:27.697111   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:27.697348   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:28.197148   48438 type.go:168] "Request Body" body=""
	I1212 19:52:28.197218   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:28.197538   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:28.697363   48438 type.go:168] "Request Body" body=""
	I1212 19:52:28.697444   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:28.697821   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:29.197283   48438 type.go:168] "Request Body" body=""
	I1212 19:52:29.197351   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:29.197604   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:29.697409   48438 type.go:168] "Request Body" body=""
	I1212 19:52:29.697482   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:29.697829   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:29.697881   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:30.196648   48438 type.go:168] "Request Body" body=""
	I1212 19:52:30.196718   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:30.197048   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:30.696605   48438 type.go:168] "Request Body" body=""
	I1212 19:52:30.696685   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:30.696999   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:31.196917   48438 type.go:168] "Request Body" body=""
	I1212 19:52:31.196985   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:31.197286   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:31.696593   48438 type.go:168] "Request Body" body=""
	I1212 19:52:31.696671   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:31.697003   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:32.196637   48438 type.go:168] "Request Body" body=""
	I1212 19:52:32.196716   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:32.196973   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:32.197032   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:32.696666   48438 type.go:168] "Request Body" body=""
	I1212 19:52:32.696739   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:32.697092   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:33.196825   48438 type.go:168] "Request Body" body=""
	I1212 19:52:33.196900   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:33.197340   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:33.697027   48438 type.go:168] "Request Body" body=""
	I1212 19:52:33.697095   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:33.697364   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:34.197120   48438 type.go:168] "Request Body" body=""
	I1212 19:52:34.197191   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:34.197507   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:34.197557   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:34.697300   48438 type.go:168] "Request Body" body=""
	I1212 19:52:34.697378   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:34.697686   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:35.197072   48438 type.go:168] "Request Body" body=""
	I1212 19:52:35.197158   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:35.197415   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:35.697050   48438 type.go:168] "Request Body" body=""
	I1212 19:52:35.697129   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:35.697418   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:36.197163   48438 type.go:168] "Request Body" body=""
	I1212 19:52:36.197234   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:36.197573   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:36.197628   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:36.697048   48438 type.go:168] "Request Body" body=""
	I1212 19:52:36.697115   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:36.697374   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:37.197145   48438 type.go:168] "Request Body" body=""
	I1212 19:52:37.197222   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:37.197577   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:37.697363   48438 type.go:168] "Request Body" body=""
	I1212 19:52:37.697438   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:37.697758   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:38.197052   48438 type.go:168] "Request Body" body=""
	I1212 19:52:38.197121   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:38.197364   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:38.697121   48438 type.go:168] "Request Body" body=""
	I1212 19:52:38.697188   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:38.697511   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:38.697564   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:39.197148   48438 type.go:168] "Request Body" body=""
	I1212 19:52:39.197221   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:39.197541   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:39.697045   48438 type.go:168] "Request Body" body=""
	I1212 19:52:39.697121   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:39.697416   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:40.197422   48438 type.go:168] "Request Body" body=""
	I1212 19:52:40.197496   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:40.197841   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:40.696587   48438 type.go:168] "Request Body" body=""
	I1212 19:52:40.696660   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:40.697003   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:41.196830   48438 type.go:168] "Request Body" body=""
	I1212 19:52:41.196900   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:41.197165   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:41.197208   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:41.696885   48438 type.go:168] "Request Body" body=""
	I1212 19:52:41.696962   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:41.697302   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:42.197049   48438 type.go:168] "Request Body" body=""
	I1212 19:52:42.197136   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:42.197480   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:42.697034   48438 type.go:168] "Request Body" body=""
	I1212 19:52:42.697109   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:42.697359   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:43.197128   48438 type.go:168] "Request Body" body=""
	I1212 19:52:43.197206   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:43.197560   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:43.197616   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:43.697366   48438 type.go:168] "Request Body" body=""
	I1212 19:52:43.697437   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:43.697733   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:44.197049   48438 type.go:168] "Request Body" body=""
	I1212 19:52:44.197119   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:44.197383   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:44.697154   48438 type.go:168] "Request Body" body=""
	I1212 19:52:44.697224   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:44.697554   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:45.197418   48438 type.go:168] "Request Body" body=""
	I1212 19:52:45.197622   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:45.198043   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:45.198111   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:45.696799   48438 type.go:168] "Request Body" body=""
	I1212 19:52:45.696866   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:45.697155   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:46.197195   48438 type.go:168] "Request Body" body=""
	I1212 19:52:46.197330   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:46.197994   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:46.696797   48438 type.go:168] "Request Body" body=""
	I1212 19:52:46.696869   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:46.697189   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:47.196859   48438 type.go:168] "Request Body" body=""
	I1212 19:52:47.196928   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:47.197254   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:47.696598   48438 type.go:168] "Request Body" body=""
	I1212 19:52:47.696688   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:47.697025   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:47.697081   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-384006 poll repeats every ~500ms from 19:52:48 through 19:53:48, each attempt returning an empty response and failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 re-logs the same "will retry" warning roughly every two seconds throughout ...]
	I1212 19:53:49.196542   48438 type.go:168] "Request Body" body=""
	I1212 19:53:49.196622   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:49.196995   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:49.696556   48438 type.go:168] "Request Body" body=""
	I1212 19:53:49.696630   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:49.696954   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:50.196908   48438 type.go:168] "Request Body" body=""
	I1212 19:53:50.196982   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:50.197236   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:50.696599   48438 type.go:168] "Request Body" body=""
	I1212 19:53:50.696673   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:50.696998   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:50.697100   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:51.197065   48438 type.go:168] "Request Body" body=""
	I1212 19:53:51.197137   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:51.197471   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:51.697096   48438 type.go:168] "Request Body" body=""
	I1212 19:53:51.697167   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:51.697415   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:52.197178   48438 type.go:168] "Request Body" body=""
	I1212 19:53:52.197249   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:52.197545   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:52.697252   48438 type.go:168] "Request Body" body=""
	I1212 19:53:52.697323   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:52.697637   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:52.697692   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:53.197047   48438 type.go:168] "Request Body" body=""
	I1212 19:53:53.197114   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:53.197377   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:53.697133   48438 type.go:168] "Request Body" body=""
	I1212 19:53:53.697217   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:53.697511   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:54.197195   48438 type.go:168] "Request Body" body=""
	I1212 19:53:54.197316   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:54.197626   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:54.697029   48438 type.go:168] "Request Body" body=""
	I1212 19:53:54.697097   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:54.697384   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:55.197154   48438 type.go:168] "Request Body" body=""
	I1212 19:53:55.197226   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:55.197534   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:55.197594   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:55.697076   48438 type.go:168] "Request Body" body=""
	I1212 19:53:55.697150   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:55.697464   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:56.197358   48438 type.go:168] "Request Body" body=""
	I1212 19:53:56.197424   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:56.197682   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:56.697448   48438 type.go:168] "Request Body" body=""
	I1212 19:53:56.697524   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:56.697853   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:57.196589   48438 type.go:168] "Request Body" body=""
	I1212 19:53:57.196672   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:57.197005   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:57.696668   48438 type.go:168] "Request Body" body=""
	I1212 19:53:57.696743   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:57.697044   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:57.697102   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:58.196594   48438 type.go:168] "Request Body" body=""
	I1212 19:53:58.196721   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:58.197023   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:58.696739   48438 type.go:168] "Request Body" body=""
	I1212 19:53:58.696813   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:58.697128   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:59.196544   48438 type.go:168] "Request Body" body=""
	I1212 19:53:59.196620   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:59.196916   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:59.696616   48438 type.go:168] "Request Body" body=""
	I1212 19:53:59.696690   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:59.696999   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:00.196755   48438 type.go:168] "Request Body" body=""
	I1212 19:54:00.196856   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:00.197201   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:00.197255   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:00.696903   48438 type.go:168] "Request Body" body=""
	I1212 19:54:00.696982   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:00.697296   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:01.197188   48438 type.go:168] "Request Body" body=""
	I1212 19:54:01.197260   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:01.197599   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:01.697267   48438 type.go:168] "Request Body" body=""
	I1212 19:54:01.697339   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:01.697686   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:02.197043   48438 type.go:168] "Request Body" body=""
	I1212 19:54:02.197122   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:02.197381   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:02.197430   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:02.697170   48438 type.go:168] "Request Body" body=""
	I1212 19:54:02.697265   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:02.697621   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:03.197435   48438 type.go:168] "Request Body" body=""
	I1212 19:54:03.197518   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:03.197849   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:03.696519   48438 type.go:168] "Request Body" body=""
	I1212 19:54:03.696591   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:03.696894   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:04.196608   48438 type.go:168] "Request Body" body=""
	I1212 19:54:04.196681   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:04.197029   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:04.696731   48438 type.go:168] "Request Body" body=""
	I1212 19:54:04.696801   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:04.697124   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:04.697174   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:05.196541   48438 type.go:168] "Request Body" body=""
	I1212 19:54:05.196621   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:05.196959   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:05.696572   48438 type.go:168] "Request Body" body=""
	I1212 19:54:05.696651   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:05.696979   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:06.196971   48438 type.go:168] "Request Body" body=""
	I1212 19:54:06.197050   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:06.197372   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:06.696982   48438 type.go:168] "Request Body" body=""
	I1212 19:54:06.697050   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:06.697313   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:06.697353   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:07.197152   48438 type.go:168] "Request Body" body=""
	I1212 19:54:07.197223   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:07.197552   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:07.697346   48438 type.go:168] "Request Body" body=""
	I1212 19:54:07.697416   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:07.697736   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:08.197037   48438 type.go:168] "Request Body" body=""
	I1212 19:54:08.197113   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:08.197390   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:08.697159   48438 type.go:168] "Request Body" body=""
	I1212 19:54:08.697238   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:08.697572   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:08.697622   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:09.197260   48438 type.go:168] "Request Body" body=""
	I1212 19:54:09.197335   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:09.197650   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:09.697011   48438 type.go:168] "Request Body" body=""
	I1212 19:54:09.697085   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:09.697367   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:10.197549   48438 type.go:168] "Request Body" body=""
	I1212 19:54:10.197634   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:10.197971   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:10.696564   48438 type.go:168] "Request Body" body=""
	I1212 19:54:10.696638   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:10.696971   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:11.196845   48438 type.go:168] "Request Body" body=""
	I1212 19:54:11.196925   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:11.197172   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:11.197214   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:11.696846   48438 type.go:168] "Request Body" body=""
	I1212 19:54:11.696918   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:11.697216   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:12.196612   48438 type.go:168] "Request Body" body=""
	I1212 19:54:12.196682   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:12.197027   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:12.696568   48438 type.go:168] "Request Body" body=""
	I1212 19:54:12.696638   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:12.696933   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:13.196634   48438 type.go:168] "Request Body" body=""
	I1212 19:54:13.196725   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:13.197087   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:13.696789   48438 type.go:168] "Request Body" body=""
	I1212 19:54:13.696882   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:13.697231   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:13.697285   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:14.196910   48438 type.go:168] "Request Body" body=""
	I1212 19:54:14.196976   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:14.197328   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:14.697114   48438 type.go:168] "Request Body" body=""
	I1212 19:54:14.697187   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:14.697517   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:15.197329   48438 type.go:168] "Request Body" body=""
	I1212 19:54:15.197401   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:15.197739   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:15.697022   48438 type.go:168] "Request Body" body=""
	I1212 19:54:15.697095   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:15.697438   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:15.697494   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:16.197185   48438 type.go:168] "Request Body" body=""
	I1212 19:54:16.197263   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:16.197574   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:16.697365   48438 type.go:168] "Request Body" body=""
	I1212 19:54:16.697441   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:16.697760   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:17.197010   48438 type.go:168] "Request Body" body=""
	I1212 19:54:17.197077   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:17.197323   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:17.696609   48438 type.go:168] "Request Body" body=""
	I1212 19:54:17.696678   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:17.696995   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:18.196608   48438 type.go:168] "Request Body" body=""
	I1212 19:54:18.196691   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:18.197012   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:18.197067   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:18.696737   48438 type.go:168] "Request Body" body=""
	I1212 19:54:18.696805   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:18.697100   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:19.196598   48438 type.go:168] "Request Body" body=""
	I1212 19:54:19.196675   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:19.196990   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:19.696701   48438 type.go:168] "Request Body" body=""
	I1212 19:54:19.696780   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:19.697061   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:20.196546   48438 type.go:168] "Request Body" body=""
	I1212 19:54:20.196624   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:20.196899   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:20.696611   48438 type.go:168] "Request Body" body=""
	I1212 19:54:20.696682   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:20.697017   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:20.697069   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:21.196889   48438 type.go:168] "Request Body" body=""
	I1212 19:54:21.196962   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:21.197310   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:21.696541   48438 type.go:168] "Request Body" body=""
	I1212 19:54:21.696609   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:21.696897   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:22.196588   48438 type.go:168] "Request Body" body=""
	I1212 19:54:22.196663   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:22.196947   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:22.696626   48438 type.go:168] "Request Body" body=""
	I1212 19:54:22.696697   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:22.697034   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:22.697092   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:23.196546   48438 type.go:168] "Request Body" body=""
	I1212 19:54:23.196618   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:23.196862   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:23.696546   48438 type.go:168] "Request Body" body=""
	I1212 19:54:23.696624   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:23.696934   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:24.196592   48438 type.go:168] "Request Body" body=""
	I1212 19:54:24.196663   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:24.197022   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:24.696534   48438 type.go:168] "Request Body" body=""
	I1212 19:54:24.696609   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:24.696904   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:25.196574   48438 type.go:168] "Request Body" body=""
	I1212 19:54:25.196649   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:25.196992   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:25.197054   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:25.696715   48438 type.go:168] "Request Body" body=""
	I1212 19:54:25.696805   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:25.697123   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:26.197072   48438 type.go:168] "Request Body" body=""
	I1212 19:54:26.197139   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:26.197388   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:26.697204   48438 type.go:168] "Request Body" body=""
	I1212 19:54:26.697275   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:26.697575   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:27.197337   48438 type.go:168] "Request Body" body=""
	I1212 19:54:27.197409   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:27.197721   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:27.197781   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:27.697027   48438 type.go:168] "Request Body" body=""
	I1212 19:54:27.697097   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:27.697337   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:28.197152   48438 type.go:168] "Request Body" body=""
	I1212 19:54:28.197230   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:28.197559   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:28.697347   48438 type.go:168] "Request Body" body=""
	I1212 19:54:28.697417   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:28.697713   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:29.197011   48438 type.go:168] "Request Body" body=""
	I1212 19:54:29.197084   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:29.197381   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:29.697150   48438 type.go:168] "Request Body" body=""
	I1212 19:54:29.697222   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:29.697555   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:29.697607   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:30.197366   48438 type.go:168] "Request Body" body=""
	I1212 19:54:30.197441   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:30.197781   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:30.696486   48438 type.go:168] "Request Body" body=""
	I1212 19:54:30.696556   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:30.696811   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:31.196900   48438 type.go:168] "Request Body" body=""
	I1212 19:54:31.196971   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:31.197252   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:31.696927   48438 type.go:168] "Request Body" body=""
	I1212 19:54:31.697006   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:31.697340   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:32.196886   48438 type.go:168] "Request Body" body=""
	I1212 19:54:32.196974   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:32.197251   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:32.197302   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:32.696579   48438 type.go:168] "Request Body" body=""
	I1212 19:54:32.696652   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:32.696967   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:33.196682   48438 type.go:168] "Request Body" body=""
	I1212 19:54:33.196752   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:33.197083   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:33.696765   48438 type.go:168] "Request Body" body=""
	I1212 19:54:33.696829   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:33.697124   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:34.196597   48438 type.go:168] "Request Body" body=""
	I1212 19:54:34.196667   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:34.197010   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:34.696713   48438 type.go:168] "Request Body" body=""
	I1212 19:54:34.696782   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:34.697098   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:34.697159   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:35.196600   48438 type.go:168] "Request Body" body=""
	I1212 19:54:35.196677   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:35.197023   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:35.696607   48438 type.go:168] "Request Body" body=""
	I1212 19:54:35.696685   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:35.697032   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:36.197131   48438 type.go:168] "Request Body" body=""
	I1212 19:54:36.197248   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:36.197583   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:36.697028   48438 type.go:168] "Request Body" body=""
	I1212 19:54:36.697092   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:36.697333   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:36.697376   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:37.197121   48438 type.go:168] "Request Body" body=""
	I1212 19:54:37.197202   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:37.197549   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:37.697356   48438 type.go:168] "Request Body" body=""
	I1212 19:54:37.697425   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:37.697755   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:38.196491   48438 type.go:168] "Request Body" body=""
	I1212 19:54:38.196580   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:38.196847   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:38.696555   48438 type.go:168] "Request Body" body=""
	I1212 19:54:38.696630   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:38.697022   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:39.196611   48438 type.go:168] "Request Body" body=""
	I1212 19:54:39.196682   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:39.196997   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:39.197044   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:39.696650   48438 type.go:168] "Request Body" body=""
	I1212 19:54:39.696714   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:39.696973   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:40.197051   48438 type.go:168] "Request Body" body=""
	I1212 19:54:40.197133   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:40.197510   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:40.697348   48438 type.go:168] "Request Body" body=""
	I1212 19:54:40.697434   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:40.697779   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:41.197126   48438 type.go:168] "Request Body" body=""
	I1212 19:54:41.197191   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:41.197489   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:41.197543   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:41.697273   48438 type.go:168] "Request Body" body=""
	I1212 19:54:41.697350   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:41.697678   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:42.197609   48438 type.go:168] "Request Body" body=""
	I1212 19:54:42.197692   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:42.198720   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:42.697042   48438 type.go:168] "Request Body" body=""
	I1212 19:54:42.697110   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:42.697353   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:43.197138   48438 type.go:168] "Request Body" body=""
	I1212 19:54:43.197208   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:43.197507   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:43.197562   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:43.697072   48438 type.go:168] "Request Body" body=""
	I1212 19:54:43.697139   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:43.697491   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:44.197018   48438 type.go:168] "Request Body" body=""
	I1212 19:54:44.197082   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:44.197326   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:44.696568   48438 type.go:168] "Request Body" body=""
	I1212 19:54:44.696643   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:44.696984   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:45.196739   48438 type.go:168] "Request Body" body=""
	I1212 19:54:45.196924   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:45.201386   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1212 19:54:45.201507   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:45.697031   48438 type.go:168] "Request Body" body=""
	I1212 19:54:45.697100   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:45.697337   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:46.197134   48438 type.go:168] "Request Body" body=""
	I1212 19:54:46.197222   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:46.197531   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:46.697301   48438 type.go:168] "Request Body" body=""
	I1212 19:54:46.697388   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:46.697735   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:47.197052   48438 type.go:168] "Request Body" body=""
	I1212 19:54:47.197121   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:47.197422   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:47.697238   48438 type.go:168] "Request Body" body=""
	I1212 19:54:47.697317   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:47.697650   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:47.697707   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:48.197476   48438 type.go:168] "Request Body" body=""
	I1212 19:54:48.197548   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:48.197868   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:48.696528   48438 type.go:168] "Request Body" body=""
	I1212 19:54:48.696600   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:48.696881   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:49.196620   48438 type.go:168] "Request Body" body=""
	I1212 19:54:49.196696   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:49.197016   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:49.696697   48438 type.go:168] "Request Body" body=""
	I1212 19:54:49.696774   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:49.697075   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:50.197033   48438 type.go:168] "Request Body" body=""
	I1212 19:54:50.197106   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:50.197414   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:50.197468   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:50.697208   48438 type.go:168] "Request Body" body=""
	I1212 19:54:50.697277   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:50.697625   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:51.197524   48438 type.go:168] "Request Body" body=""
	I1212 19:54:51.197596   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:51.197883   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:51.696529   48438 type.go:168] "Request Body" body=""
	I1212 19:54:51.696602   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:51.696953   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:52.196625   48438 type.go:168] "Request Body" body=""
	I1212 19:54:52.196695   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:52.197003   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:52.696563   48438 type.go:168] "Request Body" body=""
	I1212 19:54:52.696636   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:52.696938   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:52.696988   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:53.196618   48438 type.go:168] "Request Body" body=""
	I1212 19:54:53.196689   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:53.196965   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:53.696623   48438 type.go:168] "Request Body" body=""
	I1212 19:54:53.696694   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:53.697045   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:54.196759   48438 type.go:168] "Request Body" body=""
	I1212 19:54:54.196833   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:54.197151   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:54.696537   48438 type.go:168] "Request Body" body=""
	I1212 19:54:54.696603   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:54.696895   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:55.196600   48438 type.go:168] "Request Body" body=""
	I1212 19:54:55.196688   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:55.196967   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:55.197009   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:55.696707   48438 type.go:168] "Request Body" body=""
	I1212 19:54:55.696782   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:55.697095   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:56.197044   48438 type.go:168] "Request Body" body=""
	I1212 19:54:56.197110   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:56.197358   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:56.697176   48438 type.go:168] "Request Body" body=""
	I1212 19:54:56.697247   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:56.697564   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:57.197362   48438 type.go:168] "Request Body" body=""
	I1212 19:54:57.197443   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:57.197770   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:57.197827   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:57.696510   48438 type.go:168] "Request Body" body=""
	I1212 19:54:57.696582   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:57.696850   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:58.196551   48438 type.go:168] "Request Body" body=""
	I1212 19:54:58.196621   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:58.196910   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:58.696537   48438 type.go:168] "Request Body" body=""
	I1212 19:54:58.696617   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:58.696970   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:59.196540   48438 type.go:168] "Request Body" body=""
	I1212 19:54:59.196642   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:59.196980   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:59.696618   48438 type.go:168] "Request Body" body=""
	I1212 19:54:59.696689   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:59.697012   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:59.697072   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:55:00.196536   48438 type.go:168] "Request Body" body=""
	I1212 19:55:00.196632   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:00.196977   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:00.696665   48438 type.go:168] "Request Body" body=""
	I1212 19:55:00.696746   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:00.697082   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:01.197001   48438 type.go:168] "Request Body" body=""
	I1212 19:55:01.197085   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:01.197440   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:01.697258   48438 type.go:168] "Request Body" body=""
	I1212 19:55:01.697333   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:01.697671   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:55:01.697735   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:55:02.197006   48438 type.go:168] "Request Body" body=""
	I1212 19:55:02.197095   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:02.197408   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:02.697262   48438 type.go:168] "Request Body" body=""
	I1212 19:55:02.697333   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:02.697664   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:03.197461   48438 type.go:168] "Request Body" body=""
	I1212 19:55:03.197544   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:03.197886   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:03.696539   48438 type.go:168] "Request Body" body=""
	I1212 19:55:03.696609   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:03.696903   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:04.196604   48438 type.go:168] "Request Body" body=""
	I1212 19:55:04.196692   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:04.197007   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:55:04.197059   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:55:04.696722   48438 type.go:168] "Request Body" body=""
	I1212 19:55:04.696801   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:04.697084   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:05.196551   48438 type.go:168] "Request Body" body=""
	I1212 19:55:05.196619   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:05.196920   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:05.696558   48438 type.go:168] "Request Body" body=""
	I1212 19:55:05.696654   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:05.696970   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:06.196854   48438 type.go:168] "Request Body" body=""
	I1212 19:55:06.196928   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:06.197258   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:55:06.197306   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:55:06.696660   48438 type.go:168] "Request Body" body=""
	I1212 19:55:06.696733   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:06.696983   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:07.196575   48438 type.go:168] "Request Body" body=""
	I1212 19:55:07.196663   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:07.197112   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:07.696611   48438 type.go:168] "Request Body" body=""
	I1212 19:55:07.696697   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:07.697039   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:08.196559   48438 type.go:168] "Request Body" body=""
	I1212 19:55:08.196627   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:08.196929   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:08.696573   48438 type.go:168] "Request Body" body=""
	I1212 19:55:08.696643   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:08.696979   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:55:08.697031   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:55:09.196708   48438 type.go:168] "Request Body" body=""
	I1212 19:55:09.196785   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:09.197099   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:09.696682   48438 type.go:168] "Request Body" body=""
	I1212 19:55:09.696750   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:09.697054   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:10.196593   48438 type.go:168] "Request Body" body=""
	I1212 19:55:10.196676   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:10.197018   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:10.696766   48438 type.go:168] "Request Body" body=""
	I1212 19:55:10.696855   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:10.697231   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:55:10.697295   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:55:11.196994   48438 node_ready.go:38] duration metric: took 6m0.000614517s for node "functional-384006" to be "Ready" ...
	I1212 19:55:11.200166   48438 out.go:203] 
	W1212 19:55:11.203009   48438 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 19:55:11.203186   48438 out.go:285] * 
	W1212 19:55:11.205457   48438 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 19:55:11.208306   48438 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 19:55:18 functional-384006 containerd[5201]: time="2025-12-12T19:55:18.619140208Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 19:55:19 functional-384006 containerd[5201]: time="2025-12-12T19:55:19.705697748Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 12 19:55:19 functional-384006 containerd[5201]: time="2025-12-12T19:55:19.707955408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 12 19:55:19 functional-384006 containerd[5201]: time="2025-12-12T19:55:19.719148065Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 19:55:19 functional-384006 containerd[5201]: time="2025-12-12T19:55:19.719536310Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 19:55:20 functional-384006 containerd[5201]: time="2025-12-12T19:55:20.684590079Z" level=info msg="No images store for sha256:4e39f883043c3ea8a37d0151562bc1cf505db5f8a8ba3972284f6e3644631f36"
	Dec 12 19:55:20 functional-384006 containerd[5201]: time="2025-12-12T19:55:20.686788434Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-384006\""
	Dec 12 19:55:20 functional-384006 containerd[5201]: time="2025-12-12T19:55:20.693309100Z" level=info msg="ImageCreate event name:\"sha256:5661f32bede572b676872cc804975f90ff6296cfb902f98dcfd0a018d5cab590\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 19:55:20 functional-384006 containerd[5201]: time="2025-12-12T19:55:20.696414976Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-384006\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 19:55:21 functional-384006 containerd[5201]: time="2025-12-12T19:55:21.489496865Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 12 19:55:21 functional-384006 containerd[5201]: time="2025-12-12T19:55:21.492043704Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 12 19:55:21 functional-384006 containerd[5201]: time="2025-12-12T19:55:21.494111045Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 12 19:55:21 functional-384006 containerd[5201]: time="2025-12-12T19:55:21.505642415Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 12 19:55:22 functional-384006 containerd[5201]: time="2025-12-12T19:55:22.560550825Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 12 19:55:22 functional-384006 containerd[5201]: time="2025-12-12T19:55:22.563176029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 12 19:55:22 functional-384006 containerd[5201]: time="2025-12-12T19:55:22.571298837Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 19:55:22 functional-384006 containerd[5201]: time="2025-12-12T19:55:22.571997340Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 19:55:22 functional-384006 containerd[5201]: time="2025-12-12T19:55:22.594263603Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 12 19:55:22 functional-384006 containerd[5201]: time="2025-12-12T19:55:22.596664232Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 12 19:55:22 functional-384006 containerd[5201]: time="2025-12-12T19:55:22.598641607Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 12 19:55:22 functional-384006 containerd[5201]: time="2025-12-12T19:55:22.606795913Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 12 19:55:22 functional-384006 containerd[5201]: time="2025-12-12T19:55:22.732339413Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 12 19:55:22 functional-384006 containerd[5201]: time="2025-12-12T19:55:22.734512440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 12 19:55:22 functional-384006 containerd[5201]: time="2025-12-12T19:55:22.741473287Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 19:55:22 functional-384006 containerd[5201]: time="2025-12-12T19:55:22.741895607Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:55:24.453239    9183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:55:24.453775    9183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:55:24.455258    9183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:55:24.455692    9183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:55:24.457147    9183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 19:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014827] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497798] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.037128] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.743560] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.524348] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 19:55:24 up 37 min,  0 user,  load average: 0.13, 0.23, 0.54
	Linux functional-384006 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 19:55:21 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 19:55:21 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 823.
	Dec 12 19:55:21 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:21 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:21 functional-384006 kubelet[8934]: E1212 19:55:21.748286    8934 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 19:55:21 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 19:55:21 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 19:55:22 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 824.
	Dec 12 19:55:22 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:22 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:22 functional-384006 kubelet[9020]: E1212 19:55:22.499184    9020 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 19:55:22 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 19:55:22 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 19:55:23 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 825.
	Dec 12 19:55:23 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:23 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:23 functional-384006 kubelet[9079]: E1212 19:55:23.252351    9079 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 19:55:23 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 19:55:23 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 19:55:23 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 12 19:55:23 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:23 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:24 functional-384006 kubelet[9100]: E1212 19:55:24.004007    9100 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 19:55:24 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 19:55:24 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
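The captured log above fails in two coupled ways: every GET to https://192.168.49.2:8441 is refused for the entire 6m0s node-ready wait, and the "==> kubelet <==" section shows the node agent crash-looping, so the apiserver behind that port never comes up. As a rough illustration of what the node_ready.go retry loop is doing, here is a minimal client-go sketch (not minikube's actual code; the kubeconfig path and node name are stand-ins taken from this run):

// readiness_probe.go: a minimal sketch (assumption: not minikube's real
// implementation) of the poll visible above: GET the node every 500ms
// until it reports Ready or a 6-minute deadline expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms for up to 6 minutes, matching the cadence and the
	// "wait 6m0s for node" deadline visible in the log above.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "functional-384006", metav1.GetOptions{})
			if err != nil {
				// "connect: connection refused" lands here; returning nil
				// keeps retrying, as the W...node_ready.go:55 lines show.
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
	// err is non-nil on deadline, producing the "context deadline exceeded"
	// GUEST_START exit seen in the log.
	fmt.Println("node ready wait result:", err)
}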
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-384006 -n functional-384006
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-384006 -n functional-384006: exit status 2 (399.686736ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-384006" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.40s)
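The kubelet log above is the root cause of this failure group: each restart (counter 823 through 826) exits with "kubelet is configured to not run on a host using cgroup v1", so no control plane ever comes up and every kubectl call below is refused. The kernel line (5.15.0-1084-aws, #91~20.04.1-Ubuntu) is consistent with a host still on the legacy cgroup v1 hierarchy, which Ubuntu 20.04 mounts by default. The sketch below is one way to confirm which hierarchy a host runs, equivalent to the documented `stat -fc %T /sys/fs/cgroup/` check (cgroup2fs means v2, tmpfs means v1); nothing in it is minikube-specific:

// cgroup_check.go: a small sketch to confirm whether a host mounts
// cgroup v2, which is what the crash-looping kubelet above demands.
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	var fs unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &fs); err != nil {
		panic(err)
	}
	if fs.Type == unix.CGROUP2_SUPER_MAGIC {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		// On a cgroup v1 host /sys/fs/cgroup is a tmpfs of per-controller
		// mounts, the configuration this kubelet build refuses to validate.
		fmt.Println("cgroup v1 (legacy hierarchy)")
	}
}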

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-384006 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-384006 get pods: exit status 1 (109.252656ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-384006 get pods": exit status 1
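The refusal here is at the TCP layer ("connection refused"), not TLS or auth, which already rules out certificate and kubeconfig-context problems. A two-line dial check, sketched below, separates "nothing is listening" from a routing or proxy problem (which would time out instead); the address is the one the test's context points at:

// dial_check.go: a minimal sketch distinguishing "apiserver process is
// down" (connection refused, as in the kubectl error above) from a
// network problem (timeout).
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		// "connect: connection refused" here matches the logs above.
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("port is accepting connections; the problem is above TCP")
}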
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-384006
helpers_test.go:244: (dbg) docker inspect functional-384006:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17",
	        "Created": "2025-12-12T19:40:49.413785329Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43086,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T19:40:49.485581335Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/hosts",
	        "LogPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17-json.log",
	        "Name": "/functional-384006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-384006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-384006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17",
	                "LowerDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296-init/diff:/var/lib/docker/overlay2/e045d4bf347c64f3cbf42a97f0cb5729ed5699bda73ca5751717f555f7c01df1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/merged",
	                "UpperDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/diff",
	                "WorkDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-384006",
	                "Source": "/var/lib/docker/volumes/functional-384006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-384006",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-384006",
	                "name.minikube.sigs.k8s.io": "functional-384006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "36cb954f7d4f6bf90d415ba6b309740af43913afba20f6d7d93ec3c7d90d4de5",
	            "SandboxKey": "/var/run/docker/netns/36cb954f7d4f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-384006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:63:42:b7:50:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef3790c143c0333ab10341d6a40177cef53914dddf926d048a811221f7b4d25e",
	                    "EndpointID": "d9f77e46696253f9c3ce8a0a36703d7a03738ae348c39276dbe99fc3079fb5ee",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-384006",
	                        "b1a98cbc4698"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
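The inspect dump confirms the container itself is healthy: State.Status is "running" and 8441/tcp is published to 127.0.0.1:32791, so the refusals happen inside the guest rather than in Docker's port mapping. Rather than scanning the full JSON by eye, a small sketch like the following pulls out just the apiserver mapping (only the fields used here are modeled; the profile name is this run's):

// port_map.go: a sketch that extracts the apiserver port mapping from
// `docker inspect` output shaped like the dump above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "functional-384006").Output()
	if err != nil {
		panic(err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		panic(err)
	}
	for _, b := range entries[0].NetworkSettings.Ports["8441/tcp"] {
		// For the dump above this prints 127.0.0.1:32791.
		fmt.Printf("%s:%s\n", b.HostIp, b.HostPort)
	}
}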
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-384006 -n functional-384006
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-384006 -n functional-384006: exit status 2 (319.836275ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
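helpers_test treats this as tolerable because `minikube status` encodes component health in its exit code rather than signalling a hard failure: per the upstream `minikube status --help` text, the bits from least significant encode host, cluster, and Kubernetes problems, so exit status 2 here means the host is up but the cluster is not healthy, consistent with the "Running" host state above. A sketch for decoding it, assuming that bit layout:

    out/minikube-linux-arm64 status -p functional-384006; rc=$?
    # bit 0 = host NOK, bit 1 = cluster NOK, bit 2 = kubernetes NOK
    echo "host:$(( rc & 1 )) cluster:$(( (rc >> 1) & 1 )) kubernetes:$(( (rc >> 2) & 1 ))"
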
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-008271 image ls --format short --alsologtostderr                                                                                             │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image   │ functional-008271 image ls --format yaml --alsologtostderr                                                                                              │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image   │ functional-008271 image ls --format json --alsologtostderr                                                                                              │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image   │ functional-008271 image ls --format table --alsologtostderr                                                                                             │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ ssh     │ functional-008271 ssh pgrep buildkitd                                                                                                                   │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │                     │
	│ image   │ functional-008271 image build -t localhost/my-image:functional-008271 testdata/build --alsologtostderr                                                  │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image   │ functional-008271 image ls                                                                                                                              │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ delete  │ -p functional-008271                                                                                                                                    │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ start   │ -p functional-384006 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │                     │
	│ start   │ -p functional-384006 --alsologtostderr -v=8                                                                                                             │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:49 UTC │                     │
	│ cache   │ functional-384006 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ functional-384006 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ functional-384006 cache add registry.k8s.io/pause:latest                                                                                                │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ functional-384006 cache add minikube-local-cache-test:functional-384006                                                                                 │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ functional-384006 cache delete minikube-local-cache-test:functional-384006                                                                              │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ ssh     │ functional-384006 ssh sudo crictl images                                                                                                                │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ ssh     │ functional-384006 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ ssh     │ functional-384006 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │                     │
	│ cache   │ functional-384006 cache reload                                                                                                                          │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ ssh     │ functional-384006 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ kubectl │ functional-384006 kubectl -- --context functional-384006 get pods                                                                                       │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
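	
	The Audit table above is rendered by `minikube logs` from the audit trail minikube keeps on the host. Assuming the default layout (JSON lines under $MINIKUBE_HOME/logs/audit.json, with per-entry fields named as below — both assumptions), the same rows can be queried directly:
	
	    # Last 25 audit entries as command/profile/start-time rows (field names assumed)
	    tail -n 25 ~/.minikube/logs/audit.json | jq -r '[.data.command, .data.profile, .data.startTime] | @tsv'
	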
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:49:06
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
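	
	The `[IWEF]` prefix is the klog severity (Info, Warning, Error, Fatal), so problems can be pulled out of a long start log with a plain pattern match, e.g. against a saved copy of the log below:
	
	    # Keep only warning/error/fatal klog lines
	    grep -E '^[WEF][0-9]{4} [0-9:.]+' minikube-start.log
	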
	I1212 19:49:06.161667   48438 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:49:06.161882   48438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:49:06.161913   48438 out.go:374] Setting ErrFile to fd 2...
	I1212 19:49:06.161935   48438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:49:06.162192   48438 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 19:49:06.162605   48438 out.go:368] Setting JSON to false
	I1212 19:49:06.163501   48438 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1896,"bootTime":1765567051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 19:49:06.163603   48438 start.go:143] virtualization:  
	I1212 19:49:06.167059   48438 out.go:179] * [functional-384006] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 19:49:06.170023   48438 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:49:06.170127   48438 notify.go:221] Checking for updates...
	I1212 19:49:06.175791   48438 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:49:06.178620   48438 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:49:06.181479   48438 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	I1212 19:49:06.184334   48438 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 19:49:06.187177   48438 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:49:06.190472   48438 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 19:49:06.190582   48438 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:49:06.226589   48438 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 19:49:06.226705   48438 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:49:06.287038   48438 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 19:49:06.278380602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 19:49:06.287144   48438 docker.go:319] overlay module found
	I1212 19:49:06.290214   48438 out.go:179] * Using the docker driver based on existing profile
	I1212 19:49:06.293103   48438 start.go:309] selected driver: docker
	I1212 19:49:06.293122   48438 start.go:927] validating driver "docker" against &{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:49:06.293257   48438 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:49:06.293353   48438 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:49:06.346602   48438 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 19:49:06.338111982 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 19:49:06.347001   48438 cni.go:84] Creating CNI manager for ""
	I1212 19:49:06.347058   48438 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 19:49:06.347109   48438 start.go:353] cluster config:
	{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:49:06.350199   48438 out.go:179] * Starting "functional-384006" primary control-plane node in "functional-384006" cluster
	I1212 19:49:06.353090   48438 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 19:49:06.356052   48438 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 19:49:06.358945   48438 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 19:49:06.359005   48438 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 19:49:06.359039   48438 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 19:49:06.359049   48438 cache.go:65] Caching tarball of preloaded images
	I1212 19:49:06.359132   48438 preload.go:238] Found /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 19:49:06.359143   48438 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 19:49:06.359246   48438 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/config.json ...
	I1212 19:49:06.377622   48438 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 19:49:06.377646   48438 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 19:49:06.377660   48438 cache.go:243] Successfully downloaded all kic artifacts
	I1212 19:49:06.377689   48438 start.go:360] acquireMachinesLock for functional-384006: {Name:mk3334c8fedf7efc32fb4628474f2cba3c1d9181 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:49:06.377751   48438 start.go:364] duration metric: took 39.285µs to acquireMachinesLock for "functional-384006"
	I1212 19:49:06.377774   48438 start.go:96] Skipping create...Using existing machine configuration
	I1212 19:49:06.377781   48438 fix.go:54] fixHost starting: 
	I1212 19:49:06.378037   48438 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
	I1212 19:49:06.394046   48438 fix.go:112] recreateIfNeeded on functional-384006: state=Running err=<nil>
	W1212 19:49:06.394073   48438 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 19:49:06.397347   48438 out.go:252] * Updating the running docker "functional-384006" container ...
	I1212 19:49:06.397378   48438 machine.go:94] provisionDockerMachine start ...
	I1212 19:49:06.397470   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:06.413547   48438 main.go:143] libmachine: Using SSH client type: native
	I1212 19:49:06.413876   48438 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:49:06.413891   48438 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 19:49:06.567084   48438 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-384006
	
	I1212 19:49:06.567107   48438 ubuntu.go:182] provisioning hostname "functional-384006"
	I1212 19:49:06.567205   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:06.584099   48438 main.go:143] libmachine: Using SSH client type: native
	I1212 19:49:06.584405   48438 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:49:06.584422   48438 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-384006 && echo "functional-384006" | sudo tee /etc/hostname
	I1212 19:49:06.744613   48438 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-384006
	
	I1212 19:49:06.744691   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:06.765941   48438 main.go:143] libmachine: Using SSH client type: native
	I1212 19:49:06.766253   48438 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:49:06.766274   48438 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-384006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-384006/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-384006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 19:49:06.919909   48438 main.go:143] libmachine: SSH cmd err, output: <nil>: 
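	
	The shell snippet above pins the machine's hostname to the Debian-conventional 127.0.1.1 entry so local tools can resolve it without DNS; after it runs, /etc/hosts carries a line like the following (illustrative):
	
	    127.0.1.1 functional-384006
	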
	I1212 19:49:06.919937   48438 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-2315/.minikube}
	I1212 19:49:06.919964   48438 ubuntu.go:190] setting up certificates
	I1212 19:49:06.919986   48438 provision.go:84] configureAuth start
	I1212 19:49:06.920046   48438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384006
	I1212 19:49:06.936937   48438 provision.go:143] copyHostCerts
	I1212 19:49:06.936980   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem
	I1212 19:49:06.937022   48438 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem, removing ...
	I1212 19:49:06.937035   48438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem
	I1212 19:49:06.937107   48438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem (1078 bytes)
	I1212 19:49:06.937204   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem
	I1212 19:49:06.937227   48438 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem, removing ...
	I1212 19:49:06.937232   48438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem
	I1212 19:49:06.937260   48438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem (1123 bytes)
	I1212 19:49:06.937320   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem
	I1212 19:49:06.937341   48438 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem, removing ...
	I1212 19:49:06.937354   48438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem
	I1212 19:49:06.937380   48438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem (1679 bytes)
	I1212 19:49:06.937435   48438 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem org=jenkins.functional-384006 san=[127.0.0.1 192.168.49.2 functional-384006 localhost minikube]
	I1212 19:49:07.142288   48438 provision.go:177] copyRemoteCerts
	I1212 19:49:07.142366   48438 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 19:49:07.142409   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:07.158934   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:07.267886   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 19:49:07.267945   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 19:49:07.284419   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 19:49:07.284477   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 19:49:07.301465   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 19:49:07.301546   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 19:49:07.318717   48438 provision.go:87] duration metric: took 398.706755ms to configureAuth
	I1212 19:49:07.318790   48438 ubuntu.go:206] setting minikube options for container-runtime
	I1212 19:49:07.319006   48438 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 19:49:07.319035   48438 machine.go:97] duration metric: took 921.650297ms to provisionDockerMachine
	I1212 19:49:07.319058   48438 start.go:293] postStartSetup for "functional-384006" (driver="docker")
	I1212 19:49:07.319080   48438 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 19:49:07.319173   48438 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 19:49:07.319238   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:07.336520   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:07.439884   48438 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 19:49:07.443234   48438 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1212 19:49:07.443254   48438 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1212 19:49:07.443259   48438 command_runner.go:130] > VERSION_ID="12"
	I1212 19:49:07.443263   48438 command_runner.go:130] > VERSION="12 (bookworm)"
	I1212 19:49:07.443268   48438 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1212 19:49:07.443272   48438 command_runner.go:130] > ID=debian
	I1212 19:49:07.443276   48438 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1212 19:49:07.443281   48438 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1212 19:49:07.443289   48438 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1212 19:49:07.443341   48438 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 19:49:07.443361   48438 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 19:49:07.443371   48438 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-2315/.minikube/addons for local assets ...
	I1212 19:49:07.443421   48438 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-2315/.minikube/files for local assets ...
	I1212 19:49:07.443503   48438 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1212 19:49:07.443510   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem -> /etc/ssl/certs/41202.pem
	I1212 19:49:07.443585   48438 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> hosts in /etc/test/nested/copy/4120
	I1212 19:49:07.443589   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> /etc/test/nested/copy/4120/hosts
	I1212 19:49:07.443629   48438 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4120
	I1212 19:49:07.450818   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1212 19:49:07.468474   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts --> /etc/test/nested/copy/4120/hosts (40 bytes)
	I1212 19:49:07.485034   48438 start.go:296] duration metric: took 165.952143ms for postStartSetup
	I1212 19:49:07.485111   48438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 19:49:07.485180   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:07.502057   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:07.604226   48438 command_runner.go:130] > 12%
	I1212 19:49:07.604746   48438 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 19:49:07.609551   48438 command_runner.go:130] > 172G
	I1212 19:49:07.609593   48438 fix.go:56] duration metric: took 1.231809331s for fixHost
	I1212 19:49:07.609604   48438 start.go:83] releasing machines lock for "functional-384006", held for 1.231841888s
	I1212 19:49:07.609687   48438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384006
	I1212 19:49:07.626230   48438 ssh_runner.go:195] Run: cat /version.json
	I1212 19:49:07.626285   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:07.626592   48438 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 19:49:07.626649   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:07.648515   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:07.651511   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:07.751468   48438 command_runner.go:130] > {"iso_version": "v1.37.0-1765481609-22101", "kicbase_version": "v0.0.48-1765505794-22112", "minikube_version": "v1.37.0", "commit": "2e51b54b5cee5d454381ac23cfe3d8d395879671"}
	I1212 19:49:07.751688   48438 ssh_runner.go:195] Run: systemctl --version
	I1212 19:49:07.840262   48438 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 19:49:07.843071   48438 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1212 19:49:07.843106   48438 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1212 19:49:07.843235   48438 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 19:49:07.847707   48438 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 19:49:07.847791   48438 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 19:49:07.847870   48438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 19:49:07.855348   48438 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
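	
	The find invocation above parks any pre-existing bridge/podman CNI configs so that kindnet is the only active CNI; an equivalent, shell-quoted form with the same behavior (easier to read than the logged single line):
	
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
	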
	I1212 19:49:07.855380   48438 start.go:496] detecting cgroup driver to use...
	I1212 19:49:07.855411   48438 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 19:49:07.855473   48438 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 19:49:07.872745   48438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 19:49:07.888438   48438 docker.go:218] disabling cri-docker service (if available) ...
	I1212 19:49:07.888499   48438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 19:49:07.905328   48438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 19:49:07.922378   48438 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 19:49:08.040559   48438 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 19:49:08.153632   48438 docker.go:234] disabling docker service ...
	I1212 19:49:08.153749   48438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 19:49:08.170255   48438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 19:49:08.183563   48438 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 19:49:08.296935   48438 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 19:49:08.413119   48438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 19:49:08.425880   48438 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 19:49:08.438681   48438 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 19:49:08.439732   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 19:49:08.448541   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 19:49:08.457430   48438 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 19:49:08.457506   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 19:49:08.466099   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:49:08.474729   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 19:49:08.483278   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:49:08.491712   48438 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 19:49:08.499807   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 19:49:08.508171   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 19:49:08.517078   48438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 19:49:08.525348   48438 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 19:49:08.531636   48438 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 19:49:08.532621   48438 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 19:49:08.539615   48438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:49:08.670670   48438 ssh_runner.go:195] Run: sudo systemctl restart containerd
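	
	The sed edits above rewrite /etc/containerd/config.toml in place before the daemon restart. Roughly, the end state they aim for looks like this (a sketch only: the real file carries many more keys, and containerd 2.x may nest some of these under different plugin sections):
	
	    # /etc/crictl.yaml (written verbatim above)
	    runtime-endpoint: unix:///run/containerd/containerd.sock
	
	    # /etc/containerd/config.toml — keys the sed commands target
	    [plugins."io.containerd.grpc.v1.cri"]
	      enable_unprivileged_ports = true
	      sandbox_image = "registry.k8s.io/pause:3.10.1"
	      [plugins."io.containerd.grpc.v1.cri".cni]
	        conf_dir = "/etc/cni/net.d"
	    # the runc runtime options end up with SystemdCgroup = false (cgroupfs, matching the detected host driver)
	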
	I1212 19:49:08.806796   48438 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 19:49:08.806894   48438 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 19:49:08.810696   48438 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1212 19:49:08.810773   48438 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 19:49:08.810802   48438 command_runner.go:130] > Device: 0,72	Inode: 1616        Links: 1
	I1212 19:49:08.810829   48438 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 19:49:08.810848   48438 command_runner.go:130] > Access: 2025-12-12 19:49:08.757711126 +0000
	I1212 19:49:08.810866   48438 command_runner.go:130] > Modify: 2025-12-12 19:49:08.757711126 +0000
	I1212 19:49:08.810881   48438 command_runner.go:130] > Change: 2025-12-12 19:49:08.757711126 +0000
	I1212 19:49:08.810904   48438 command_runner.go:130] >  Birth: -
	I1212 19:49:08.811086   48438 start.go:564] Will wait 60s for crictl version
	I1212 19:49:08.811174   48438 ssh_runner.go:195] Run: which crictl
	I1212 19:49:08.814485   48438 command_runner.go:130] > /usr/local/bin/crictl
	I1212 19:49:08.814611   48438 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 19:49:08.838884   48438 command_runner.go:130] > Version:  0.1.0
	I1212 19:49:08.838955   48438 command_runner.go:130] > RuntimeName:  containerd
	I1212 19:49:08.838976   48438 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1212 19:49:08.838997   48438 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 19:49:08.840776   48438 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 19:49:08.840864   48438 ssh_runner.go:195] Run: containerd --version
	I1212 19:49:08.863238   48438 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1212 19:49:08.864954   48438 ssh_runner.go:195] Run: containerd --version
	I1212 19:49:08.884422   48438 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1212 19:49:08.891508   48438 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 19:49:08.894468   48438 cli_runner.go:164] Run: docker network inspect functional-384006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 19:49:08.910430   48438 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 19:49:08.914297   48438 command_runner.go:130] > 192.168.49.1	host.minikube.internal
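	
	The long `docker network inspect` template above flattens the network definition into one JSON object for parsing; the individual fields can also be read with simpler one-off templates (illustrative):
	
	    docker network inspect functional-384006 -f '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
	    # => 192.168.49.0/24 via 192.168.49.1
	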
	I1212 19:49:08.914409   48438 kubeadm.go:884] updating cluster {Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 19:49:08.914505   48438 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 19:49:08.914560   48438 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 19:49:08.938916   48438 command_runner.go:130] > {
	I1212 19:49:08.938935   48438 command_runner.go:130] >   "images":  [
	I1212 19:49:08.938940   48438 command_runner.go:130] >     {
	I1212 19:49:08.938949   48438 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 19:49:08.938953   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.938959   48438 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 19:49:08.938962   48438 command_runner.go:130] >       ],
	I1212 19:49:08.938967   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.938980   48438 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1212 19:49:08.938983   48438 command_runner.go:130] >       ],
	I1212 19:49:08.938988   48438 command_runner.go:130] >       "size":  "40636774",
	I1212 19:49:08.938991   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.938995   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.938998   48438 command_runner.go:130] >     },
	I1212 19:49:08.939001   48438 command_runner.go:130] >     {
	I1212 19:49:08.939009   48438 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 19:49:08.939013   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939018   48438 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 19:49:08.939022   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939026   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939034   48438 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 19:49:08.939038   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939045   48438 command_runner.go:130] >       "size":  "8034419",
	I1212 19:49:08.939049   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939053   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939056   48438 command_runner.go:130] >     },
	I1212 19:49:08.939059   48438 command_runner.go:130] >     {
	I1212 19:49:08.939066   48438 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 19:49:08.939069   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939075   48438 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 19:49:08.939078   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939084   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939091   48438 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1212 19:49:08.939095   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939100   48438 command_runner.go:130] >       "size":  "21168808",
	I1212 19:49:08.939104   48438 command_runner.go:130] >       "username":  "nonroot",
	I1212 19:49:08.939108   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939111   48438 command_runner.go:130] >     },
	I1212 19:49:08.939115   48438 command_runner.go:130] >     {
	I1212 19:49:08.939121   48438 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 19:49:08.939125   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939130   48438 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 19:49:08.939133   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939137   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939154   48438 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1212 19:49:08.939157   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939161   48438 command_runner.go:130] >       "size":  "21136588",
	I1212 19:49:08.939166   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.939170   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.939173   48438 command_runner.go:130] >       },
	I1212 19:49:08.939177   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939181   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939184   48438 command_runner.go:130] >     },
	I1212 19:49:08.939187   48438 command_runner.go:130] >     {
	I1212 19:49:08.939193   48438 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 19:49:08.939200   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939206   48438 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 19:49:08.939209   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939213   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939220   48438 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1212 19:49:08.939224   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939228   48438 command_runner.go:130] >       "size":  "24678359",
	I1212 19:49:08.939231   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.939241   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.939244   48438 command_runner.go:130] >       },
	I1212 19:49:08.939248   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939252   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939254   48438 command_runner.go:130] >     },
	I1212 19:49:08.939257   48438 command_runner.go:130] >     {
	I1212 19:49:08.939264   48438 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 19:49:08.939268   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939273   48438 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 19:49:08.939276   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939280   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939288   48438 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1212 19:49:08.939291   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939295   48438 command_runner.go:130] >       "size":  "20661043",
	I1212 19:49:08.939299   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.939302   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.939305   48438 command_runner.go:130] >       },
	I1212 19:49:08.939309   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939313   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939316   48438 command_runner.go:130] >     },
	I1212 19:49:08.939319   48438 command_runner.go:130] >     {
	I1212 19:49:08.939326   48438 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 19:49:08.939330   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939334   48438 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 19:49:08.939338   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939345   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939353   48438 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 19:49:08.939356   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939360   48438 command_runner.go:130] >       "size":  "22429671",
	I1212 19:49:08.939364   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939368   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939370   48438 command_runner.go:130] >     },
	I1212 19:49:08.939375   48438 command_runner.go:130] >     {
	I1212 19:49:08.939381   48438 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 19:49:08.939385   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939390   48438 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 19:49:08.939393   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939397   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939405   48438 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1212 19:49:08.939408   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939412   48438 command_runner.go:130] >       "size":  "15391364",
	I1212 19:49:08.939416   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.939420   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.939423   48438 command_runner.go:130] >       },
	I1212 19:49:08.939427   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939430   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.939433   48438 command_runner.go:130] >     },
	I1212 19:49:08.939437   48438 command_runner.go:130] >     {
	I1212 19:49:08.939443   48438 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 19:49:08.939447   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.939452   48438 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 19:49:08.939454   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939458   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.939465   48438 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1212 19:49:08.939469   48438 command_runner.go:130] >       ],
	I1212 19:49:08.939473   48438 command_runner.go:130] >       "size":  "267939",
	I1212 19:49:08.939476   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.939480   48438 command_runner.go:130] >         "value":  "65535"
	I1212 19:49:08.939486   48438 command_runner.go:130] >       },
	I1212 19:49:08.939490   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.939493   48438 command_runner.go:130] >       "pinned":  true
	I1212 19:49:08.939496   48438 command_runner.go:130] >     }
	I1212 19:49:08.939499   48438 command_runner.go:130] >   ]
	I1212 19:49:08.939502   48438 command_runner.go:130] > }
	I1212 19:49:08.940984   48438 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 19:49:08.941004   48438 containerd.go:534] Images already preloaded, skipping extraction
	I1212 19:49:08.941060   48438 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 19:49:08.962883   48438 command_runner.go:130] > {
	I1212 19:49:08.962905   48438 command_runner.go:130] >   "images":  [
	I1212 19:49:08.962910   48438 command_runner.go:130] >     {
	I1212 19:49:08.962919   48438 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 19:49:08.962924   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.962930   48438 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 19:49:08.962934   48438 command_runner.go:130] >       ],
	I1212 19:49:08.962938   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.962948   48438 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1212 19:49:08.962955   48438 command_runner.go:130] >       ],
	I1212 19:49:08.962964   48438 command_runner.go:130] >       "size":  "40636774",
	I1212 19:49:08.962971   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.962975   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.962985   48438 command_runner.go:130] >     },
	I1212 19:49:08.962993   48438 command_runner.go:130] >     {
	I1212 19:49:08.963005   48438 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 19:49:08.963012   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963017   48438 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 19:49:08.963021   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963026   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963035   48438 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 19:49:08.963040   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963045   48438 command_runner.go:130] >       "size":  "8034419",
	I1212 19:49:08.963049   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963055   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963058   48438 command_runner.go:130] >     },
	I1212 19:49:08.963064   48438 command_runner.go:130] >     {
	I1212 19:49:08.963071   48438 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 19:49:08.963081   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963086   48438 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 19:49:08.963090   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963104   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963113   48438 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1212 19:49:08.963116   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963123   48438 command_runner.go:130] >       "size":  "21168808",
	I1212 19:49:08.963127   48438 command_runner.go:130] >       "username":  "nonroot",
	I1212 19:49:08.963132   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963137   48438 command_runner.go:130] >     },
	I1212 19:49:08.963146   48438 command_runner.go:130] >     {
	I1212 19:49:08.963157   48438 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 19:49:08.963170   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963175   48438 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 19:49:08.963178   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963187   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963198   48438 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1212 19:49:08.963201   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963210   48438 command_runner.go:130] >       "size":  "21136588",
	I1212 19:49:08.963214   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.963221   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.963224   48438 command_runner.go:130] >       },
	I1212 19:49:08.963228   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963234   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963238   48438 command_runner.go:130] >     },
	I1212 19:49:08.963241   48438 command_runner.go:130] >     {
	I1212 19:49:08.963248   48438 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 19:49:08.963255   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963260   48438 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 19:49:08.963263   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963266   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963274   48438 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1212 19:49:08.963281   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963285   48438 command_runner.go:130] >       "size":  "24678359",
	I1212 19:49:08.963288   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.963298   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.963302   48438 command_runner.go:130] >       },
	I1212 19:49:08.963309   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963313   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963319   48438 command_runner.go:130] >     },
	I1212 19:49:08.963322   48438 command_runner.go:130] >     {
	I1212 19:49:08.963329   48438 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 19:49:08.963336   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963341   48438 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 19:49:08.963344   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963348   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963356   48438 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1212 19:49:08.963363   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963367   48438 command_runner.go:130] >       "size":  "20661043",
	I1212 19:49:08.963370   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.963374   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.963382   48438 command_runner.go:130] >       },
	I1212 19:49:08.963389   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963393   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963396   48438 command_runner.go:130] >     },
	I1212 19:49:08.963399   48438 command_runner.go:130] >     {
	I1212 19:49:08.963406   48438 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 19:49:08.963413   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963418   48438 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 19:49:08.963421   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963425   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963433   48438 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 19:49:08.963440   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963444   48438 command_runner.go:130] >       "size":  "22429671",
	I1212 19:49:08.963448   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963452   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963455   48438 command_runner.go:130] >     },
	I1212 19:49:08.963458   48438 command_runner.go:130] >     {
	I1212 19:49:08.963465   48438 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 19:49:08.963472   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963478   48438 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 19:49:08.963483   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963487   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963498   48438 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1212 19:49:08.963503   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963509   48438 command_runner.go:130] >       "size":  "15391364",
	I1212 19:49:08.963515   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.963518   48438 command_runner.go:130] >         "value":  "0"
	I1212 19:49:08.963521   48438 command_runner.go:130] >       },
	I1212 19:49:08.963525   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963529   48438 command_runner.go:130] >       "pinned":  false
	I1212 19:49:08.963534   48438 command_runner.go:130] >     },
	I1212 19:49:08.963537   48438 command_runner.go:130] >     {
	I1212 19:49:08.963547   48438 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 19:49:08.963555   48438 command_runner.go:130] >       "repoTags":  [
	I1212 19:49:08.963560   48438 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 19:49:08.963566   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963570   48438 command_runner.go:130] >       "repoDigests":  [
	I1212 19:49:08.963580   48438 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1212 19:49:08.963587   48438 command_runner.go:130] >       ],
	I1212 19:49:08.963591   48438 command_runner.go:130] >       "size":  "267939",
	I1212 19:49:08.963594   48438 command_runner.go:130] >       "uid":  {
	I1212 19:49:08.963598   48438 command_runner.go:130] >         "value":  "65535"
	I1212 19:49:08.963604   48438 command_runner.go:130] >       },
	I1212 19:49:08.963611   48438 command_runner.go:130] >       "username":  "",
	I1212 19:49:08.963615   48438 command_runner.go:130] >       "pinned":  true
	I1212 19:49:08.963618   48438 command_runner.go:130] >     }
	I1212 19:49:08.963621   48438 command_runner.go:130] >   ]
	I1212 19:49:08.963624   48438 command_runner.go:130] > }
	I1212 19:49:08.965735   48438 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 19:49:08.965756   48438 cache_images.go:86] Images are preloaded, skipping loading
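The preload check above is driven entirely by the `crictl images --output json` payload. A minimal sketch of that check, assuming only the JSON shape visible in the log (id, repoTags, repoDigests, size, pinned) and not minikube's actual implementation:

    // Sketch: decode `crictl images --output json` and report the repo tags,
    // mirroring the "all images are preloaded" check in containerd.go.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type criImage struct {
        ID          string   `json:"id"`
        RepoTags    []string `json:"repoTags"`
        RepoDigests []string `json:"repoDigests"`
        Size        string   `json:"size"` // crictl reports size as a string
        Pinned      bool     `json:"pinned"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var payload struct {
            Images []criImage `json:"images"`
        }
        if err := json.Unmarshal(out, &payload); err != nil {
            panic(err)
        }
        for _, img := range payload.Images {
            fmt.Println(img.RepoTags, "pinned:", img.Pinned)
        }
    }

Run against the node above, this would list the nine preloaded images, including the pinned pause:3.10.1.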
	I1212 19:49:08.965764   48438 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1212 19:49:08.965868   48438 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-384006 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
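The [Unit]/[Service]/[Install] snippet above is the kubelet drop-in that is scp'd to the node a few lines below as 10-kubeadm.conf (328 bytes). A hedged sketch of how such a drop-in can be assembled from per-node values; the variable names are hypothetical, only the flag set mirrors the log:

    // Sketch, not minikube's generator: print the kubelet systemd drop-in
    // shown above for a given version, node name, and node IP.
    package main

    import "fmt"

    func main() {
        version, nodeName, nodeIP := "v1.35.0-beta.0", "functional-384006", "192.168.49.2"
        fmt.Println("[Unit]")
        fmt.Println("Wants=containerd.service")
        fmt.Println()
        fmt.Println("[Service]")
        fmt.Println("ExecStart=") // empty ExecStart= clears the command inherited from the base unit
        fmt.Printf("ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s\n", version, nodeName, nodeIP)
        fmt.Println()
        fmt.Println("[Install]")
    }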
	I1212 19:49:08.965936   48438 ssh_runner.go:195] Run: sudo crictl info
	I1212 19:49:08.990907   48438 command_runner.go:130] > {
	I1212 19:49:08.990927   48438 command_runner.go:130] >   "cniconfig": {
	I1212 19:49:08.990932   48438 command_runner.go:130] >     "Networks": [
	I1212 19:49:08.990936   48438 command_runner.go:130] >       {
	I1212 19:49:08.990942   48438 command_runner.go:130] >         "Config": {
	I1212 19:49:08.990947   48438 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1212 19:49:08.990980   48438 command_runner.go:130] >           "Name": "cni-loopback",
	I1212 19:49:08.990997   48438 command_runner.go:130] >           "Plugins": [
	I1212 19:49:08.991002   48438 command_runner.go:130] >             {
	I1212 19:49:08.991010   48438 command_runner.go:130] >               "Network": {
	I1212 19:49:08.991014   48438 command_runner.go:130] >                 "ipam": {},
	I1212 19:49:08.991020   48438 command_runner.go:130] >                 "type": "loopback"
	I1212 19:49:08.991023   48438 command_runner.go:130] >               },
	I1212 19:49:08.991033   48438 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1212 19:49:08.991041   48438 command_runner.go:130] >             }
	I1212 19:49:08.991063   48438 command_runner.go:130] >           ],
	I1212 19:49:08.991073   48438 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1212 19:49:08.991080   48438 command_runner.go:130] >         },
	I1212 19:49:08.991089   48438 command_runner.go:130] >         "IFName": "lo"
	I1212 19:49:08.991095   48438 command_runner.go:130] >       }
	I1212 19:49:08.991098   48438 command_runner.go:130] >     ],
	I1212 19:49:08.991103   48438 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1212 19:49:08.991106   48438 command_runner.go:130] >     "PluginDirs": [
	I1212 19:49:08.991109   48438 command_runner.go:130] >       "/opt/cni/bin"
	I1212 19:49:08.991113   48438 command_runner.go:130] >     ],
	I1212 19:49:08.991117   48438 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1212 19:49:08.991135   48438 command_runner.go:130] >     "Prefix": "eth"
	I1212 19:49:08.991151   48438 command_runner.go:130] >   },
	I1212 19:49:08.991154   48438 command_runner.go:130] >   "config": {
	I1212 19:49:08.991158   48438 command_runner.go:130] >     "cdiSpecDirs": [
	I1212 19:49:08.991171   48438 command_runner.go:130] >       "/etc/cdi",
	I1212 19:49:08.991184   48438 command_runner.go:130] >       "/var/run/cdi"
	I1212 19:49:08.991188   48438 command_runner.go:130] >     ],
	I1212 19:49:08.991191   48438 command_runner.go:130] >     "cni": {
	I1212 19:49:08.991195   48438 command_runner.go:130] >       "binDir": "",
	I1212 19:49:08.991202   48438 command_runner.go:130] >       "binDirs": [
	I1212 19:49:08.991206   48438 command_runner.go:130] >         "/opt/cni/bin"
	I1212 19:49:08.991209   48438 command_runner.go:130] >       ],
	I1212 19:49:08.991216   48438 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1212 19:49:08.991220   48438 command_runner.go:130] >       "confTemplate": "",
	I1212 19:49:08.991224   48438 command_runner.go:130] >       "ipPref": "",
	I1212 19:49:08.991227   48438 command_runner.go:130] >       "maxConfNum": 1,
	I1212 19:49:08.991231   48438 command_runner.go:130] >       "setupSerially": false,
	I1212 19:49:08.991235   48438 command_runner.go:130] >       "useInternalLoopback": false
	I1212 19:49:08.991248   48438 command_runner.go:130] >     },
	I1212 19:49:08.991264   48438 command_runner.go:130] >     "containerd": {
	I1212 19:49:08.991273   48438 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1212 19:49:08.991288   48438 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1212 19:49:08.991302   48438 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1212 19:49:08.991311   48438 command_runner.go:130] >       "runtimes": {
	I1212 19:49:08.991317   48438 command_runner.go:130] >         "runc": {
	I1212 19:49:08.991321   48438 command_runner.go:130] >           "ContainerAnnotations": null,
	I1212 19:49:08.991325   48438 command_runner.go:130] >           "PodAnnotations": null,
	I1212 19:49:08.991329   48438 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1212 19:49:08.991340   48438 command_runner.go:130] >           "cgroupWritable": false,
	I1212 19:49:08.991344   48438 command_runner.go:130] >           "cniConfDir": "",
	I1212 19:49:08.991347   48438 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1212 19:49:08.991351   48438 command_runner.go:130] >           "io_type": "",
	I1212 19:49:08.991366   48438 command_runner.go:130] >           "options": {
	I1212 19:49:08.991378   48438 command_runner.go:130] >             "BinaryName": "",
	I1212 19:49:08.991382   48438 command_runner.go:130] >             "CriuImagePath": "",
	I1212 19:49:08.991386   48438 command_runner.go:130] >             "CriuWorkPath": "",
	I1212 19:49:08.991400   48438 command_runner.go:130] >             "IoGid": 0,
	I1212 19:49:08.991410   48438 command_runner.go:130] >             "IoUid": 0,
	I1212 19:49:08.991414   48438 command_runner.go:130] >             "NoNewKeyring": false,
	I1212 19:49:08.991418   48438 command_runner.go:130] >             "Root": "",
	I1212 19:49:08.991422   48438 command_runner.go:130] >             "ShimCgroup": "",
	I1212 19:49:08.991427   48438 command_runner.go:130] >             "SystemdCgroup": false
	I1212 19:49:08.991433   48438 command_runner.go:130] >           },
	I1212 19:49:08.991439   48438 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1212 19:49:08.991455   48438 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1212 19:49:08.991461   48438 command_runner.go:130] >           "runtimePath": "",
	I1212 19:49:08.991476   48438 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1212 19:49:08.991487   48438 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1212 19:49:08.991491   48438 command_runner.go:130] >           "snapshotter": ""
	I1212 19:49:08.991503   48438 command_runner.go:130] >         }
	I1212 19:49:08.991510   48438 command_runner.go:130] >       }
	I1212 19:49:08.991513   48438 command_runner.go:130] >     },
	I1212 19:49:08.991525   48438 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1212 19:49:08.991540   48438 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1212 19:49:08.991547   48438 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1212 19:49:08.991554   48438 command_runner.go:130] >     "disableApparmor": false,
	I1212 19:49:08.991559   48438 command_runner.go:130] >     "disableHugetlbController": true,
	I1212 19:49:08.991564   48438 command_runner.go:130] >     "disableProcMount": false,
	I1212 19:49:08.991583   48438 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1212 19:49:08.991588   48438 command_runner.go:130] >     "enableCDI": true,
	I1212 19:49:08.991603   48438 command_runner.go:130] >     "enableSelinux": false,
	I1212 19:49:08.991616   48438 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1212 19:49:08.991621   48438 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1212 19:49:08.991627   48438 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1212 19:49:08.991634   48438 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1212 19:49:08.991639   48438 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1212 19:49:08.991643   48438 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1212 19:49:08.991653   48438 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1212 19:49:08.991658   48438 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1212 19:49:08.991662   48438 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1212 19:49:08.991678   48438 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1212 19:49:08.991689   48438 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1212 19:49:08.991694   48438 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1212 19:49:08.991696   48438 command_runner.go:130] >   },
	I1212 19:49:08.991700   48438 command_runner.go:130] >   "features": {
	I1212 19:49:08.991704   48438 command_runner.go:130] >     "supplemental_groups_policy": true
	I1212 19:49:08.991706   48438 command_runner.go:130] >   },
	I1212 19:49:08.991710   48438 command_runner.go:130] >   "golang": "go1.24.9",
	I1212 19:49:08.991719   48438 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1212 19:49:08.991728   48438 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1212 19:49:08.991732   48438 command_runner.go:130] >   "runtimeHandlers": [
	I1212 19:49:08.991735   48438 command_runner.go:130] >     {
	I1212 19:49:08.991739   48438 command_runner.go:130] >       "features": {
	I1212 19:49:08.991743   48438 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1212 19:49:08.991747   48438 command_runner.go:130] >         "user_namespaces": true
	I1212 19:49:08.991751   48438 command_runner.go:130] >       }
	I1212 19:49:08.991759   48438 command_runner.go:130] >     },
	I1212 19:49:08.991762   48438 command_runner.go:130] >     {
	I1212 19:49:08.991766   48438 command_runner.go:130] >       "features": {
	I1212 19:49:08.991770   48438 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1212 19:49:08.991774   48438 command_runner.go:130] >         "user_namespaces": true
	I1212 19:49:08.991796   48438 command_runner.go:130] >       },
	I1212 19:49:08.991800   48438 command_runner.go:130] >       "name": "runc"
	I1212 19:49:08.991803   48438 command_runner.go:130] >     }
	I1212 19:49:08.991807   48438 command_runner.go:130] >   ],
	I1212 19:49:08.991875   48438 command_runner.go:130] >   "status": {
	I1212 19:49:08.991889   48438 command_runner.go:130] >     "conditions": [
	I1212 19:49:08.991892   48438 command_runner.go:130] >       {
	I1212 19:49:08.991895   48438 command_runner.go:130] >         "message": "",
	I1212 19:49:08.991899   48438 command_runner.go:130] >         "reason": "",
	I1212 19:49:08.991904   48438 command_runner.go:130] >         "status": true,
	I1212 19:49:08.991918   48438 command_runner.go:130] >         "type": "RuntimeReady"
	I1212 19:49:08.991921   48438 command_runner.go:130] >       },
	I1212 19:49:08.991925   48438 command_runner.go:130] >       {
	I1212 19:49:08.991939   48438 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1212 19:49:08.991955   48438 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1212 19:49:08.991963   48438 command_runner.go:130] >         "status": false,
	I1212 19:49:08.991967   48438 command_runner.go:130] >         "type": "NetworkReady"
	I1212 19:49:08.991970   48438 command_runner.go:130] >       },
	I1212 19:49:08.991989   48438 command_runner.go:130] >       {
	I1212 19:49:08.992014   48438 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1212 19:49:08.992028   48438 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1212 19:49:08.992037   48438 command_runner.go:130] >         "status": false,
	I1212 19:49:08.992042   48438 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1212 19:49:08.992045   48438 command_runner.go:130] >       }
	I1212 19:49:08.992058   48438 command_runner.go:130] >     ]
	I1212 19:49:08.992068   48438 command_runner.go:130] >   }
	I1212 19:49:08.992071   48438 command_runner.go:130] > }
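`crictl info` above reports RuntimeReady=true but NetworkReady=false ("cni plugin not initialized"), which is expected at this point: nothing has written a CNI config to /etc/cni/net.d yet, and the very next lines decide to deploy kindnet. A sketch (an assumed helper, not minikube code) that surfaces those conditions:

    // Sketch: decode `sudo crictl info` and print status.conditions,
    // matching the RuntimeReady/NetworkReady entries shown above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl", "info").Output()
        if err != nil {
            panic(err)
        }
        var info struct {
            Status struct {
                Conditions []struct {
                    Type    string `json:"type"`
                    Status  bool   `json:"status"`
                    Reason  string `json:"reason"`
                    Message string `json:"message"`
                } `json:"conditions"`
            } `json:"status"`
        }
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        for _, c := range info.Status.Conditions {
            fmt.Printf("%s=%v %s\n", c.Type, c.Status, c.Reason)
        }
    }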
	I1212 19:49:08.994409   48438 cni.go:84] Creating CNI manager for ""
	I1212 19:49:08.994432   48438 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 19:49:08.994453   48438 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 19:49:08.994474   48438 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-384006 NodeName:functional-384006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 19:49:08.994579   48438 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-384006"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 19:49:08.994644   48438 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 19:49:09.001254   48438 command_runner.go:130] > kubeadm
	I1212 19:49:09.001273   48438 command_runner.go:130] > kubectl
	I1212 19:49:09.001277   48438 command_runner.go:130] > kubelet
	I1212 19:49:09.002097   48438 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 19:49:09.002172   48438 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 19:49:09.009620   48438 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 19:49:09.025282   48438 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 19:49:09.038423   48438 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
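The 2237-byte kubeadm.yaml.new written above is one file holding four YAML documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A hedged sanity-check sketch, assuming gopkg.in/yaml.v3 is available, that prints the kind of each document:

    // Sketch: walk the multi-document kubeadm config and print each kind.
    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Println(doc.APIVersion, doc.Kind) // e.g. kubeadm.k8s.io/v1beta4 InitConfiguration
        }
    }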
	I1212 19:49:09.054506   48438 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 19:49:09.058001   48438 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1212 19:49:09.058066   48438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:49:09.175064   48438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:49:09.445347   48438 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006 for IP: 192.168.49.2
	I1212 19:49:09.445426   48438 certs.go:195] generating shared ca certs ...
	I1212 19:49:09.445484   48438 certs.go:227] acquiring lock for ca certs: {Name:mk39256c1929fe0803d745b94bd58afc348a7e3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:49:09.445704   48438 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key
	I1212 19:49:09.445799   48438 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key
	I1212 19:49:09.445839   48438 certs.go:257] generating profile certs ...
	I1212 19:49:09.446025   48438 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.key
	I1212 19:49:09.446164   48438 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key.6e756d1b
	I1212 19:49:09.446275   48438 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key
	I1212 19:49:09.446313   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 19:49:09.446386   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 19:49:09.446438   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 19:49:09.446492   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 19:49:09.446544   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 19:49:09.446605   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 19:49:09.446663   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 19:49:09.446721   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 19:49:09.446856   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem (1338 bytes)
	W1212 19:49:09.446943   48438 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1212 19:49:09.447016   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 19:49:09.447074   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem (1078 bytes)
	I1212 19:49:09.447157   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem (1123 bytes)
	I1212 19:49:09.447233   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem (1679 bytes)
	I1212 19:49:09.447516   48438 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1212 19:49:09.447598   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem -> /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.447652   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem -> /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.447686   48438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.448483   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 19:49:09.470612   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 19:49:09.491665   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 19:49:09.514138   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 19:49:09.535795   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 19:49:09.552964   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 19:49:09.570164   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 19:49:09.587343   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 19:49:09.604384   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1212 19:49:09.621471   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1212 19:49:09.638910   48438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 19:49:09.656615   48438 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 19:49:09.669235   48438 ssh_runner.go:195] Run: openssl version
	I1212 19:49:09.674787   48438 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1212 19:49:09.675343   48438 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.682988   48438 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1212 19:49:09.690425   48438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.693996   48438 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 19:40 /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.694309   48438 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:40 /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.694370   48438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1212 19:49:09.734801   48438 command_runner.go:130] > 3ec20f2e
	I1212 19:49:09.735274   48438 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 19:49:09.742485   48438 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.749966   48438 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 19:49:09.757755   48438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.761677   48438 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 19:30 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.761712   48438 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:30 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.761771   48438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:49:09.803349   48438 command_runner.go:130] > b5213941
	I1212 19:49:09.803809   48438 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 19:49:09.811062   48438 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.818242   48438 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1212 19:49:09.825568   48438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.829043   48438 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 19:40 /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.829382   48438 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:40 /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.829462   48438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1212 19:49:09.872087   48438 command_runner.go:130] > 51391683
	I1212 19:49:09.872525   48438 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
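Each of the three certificate installs above follows the same pattern: hash the PEM with `openssl x509 -hash -noout` and expose it to the system trust store as /etc/ssl/certs/<hash>.0. A sketch of that loop (helper name assumed; needs the same root privileges the test uses):

    // Sketch: compute the OpenSSL subject hash of a CA file and create the
    // <hash>.0 symlink, mirroring the `openssl x509 -hash` + `ln -fs` steps.
    package main

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // mirror `ln -fs`: replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            panic(err)
        }
    }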
	I1212 19:49:09.879635   48438 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:49:09.883004   48438 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:49:09.883053   48438 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1212 19:49:09.883072   48438 command_runner.go:130] > Device: 259,1	Inode: 1317518     Links: 1
	I1212 19:49:09.883079   48438 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 19:49:09.883085   48438 command_runner.go:130] > Access: 2025-12-12 19:45:02.427863285 +0000
	I1212 19:49:09.883090   48438 command_runner.go:130] > Modify: 2025-12-12 19:40:58.462325249 +0000
	I1212 19:49:09.883095   48438 command_runner.go:130] > Change: 2025-12-12 19:40:58.462325249 +0000
	I1212 19:49:09.883100   48438 command_runner.go:130] >  Birth: 2025-12-12 19:40:58.462325249 +0000
	I1212 19:49:09.883177   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 19:49:09.925331   48438 command_runner.go:130] > Certificate will not expire
	I1212 19:49:09.925758   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 19:49:09.966336   48438 command_runner.go:130] > Certificate will not expire
	I1212 19:49:09.966825   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 19:49:10.007601   48438 command_runner.go:130] > Certificate will not expire
	I1212 19:49:10.008047   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 19:49:10.052009   48438 command_runner.go:130] > Certificate will not expire
	I1212 19:49:10.052500   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 19:49:10.094223   48438 command_runner.go:130] > Certificate will not expire
	I1212 19:49:10.094385   48438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 19:49:10.136742   48438 command_runner.go:130] > Certificate will not expire
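The six `-checkend 86400` probes above ask openssl whether each certificate expires within the next 24 hours. The same check can be done natively in Go; a sketch, not minikube's code:

    // Sketch: native equivalent of `openssl x509 -noout -checkend 86400`.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        // True when the cert's NotAfter falls inside the next d.
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }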
	I1212 19:49:10.136814   48438 kubeadm.go:401] StartCluster: {Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:49:10.136904   48438 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 19:49:10.136973   48438 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:49:10.167070   48438 cri.go:89] found id: ""
	I1212 19:49:10.167141   48438 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 19:49:10.174626   48438 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 19:49:10.174649   48438 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 19:49:10.174663   48438 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 19:49:10.175405   48438 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 19:49:10.175423   48438 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 19:49:10.175476   48438 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 19:49:10.183010   48438 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:49:10.183461   48438 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-384006" does not appear in /home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:49:10.183602   48438 kubeconfig.go:62] /home/jenkins/minikube-integration/22112-2315/kubeconfig needs updating (will repair): [kubeconfig missing "functional-384006" cluster setting kubeconfig missing "functional-384006" context setting]
	I1212 19:49:10.183992   48438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/kubeconfig: {Name:mke1d79e374217e0c5bc78bc2d9631db0e1e9bda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:49:10.184411   48438 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:49:10.184572   48438 kapi.go:59] client config for functional-384006: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.key", CAFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 19:49:10.185056   48438 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 19:49:10.185097   48438 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 19:49:10.185107   48438 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 19:49:10.185113   48438 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 19:49:10.185120   48438 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 19:49:10.185448   48438 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 19:49:10.185546   48438 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 19:49:10.194572   48438 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1212 19:49:10.194610   48438 kubeadm.go:602] duration metric: took 19.175488ms to restartPrimaryControlPlane
	I1212 19:49:10.194619   48438 kubeadm.go:403] duration metric: took 57.811789ms to StartCluster
	I1212 19:49:10.194633   48438 settings.go:142] acquiring lock: {Name:mk405cd0853bb1c41336dcaeeb8fe9a56ff7ca00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:49:10.194694   48438 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:49:10.195302   48438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/kubeconfig: {Name:mke1d79e374217e0c5bc78bc2d9631db0e1e9bda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:49:10.195505   48438 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 19:49:10.195860   48438 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 19:49:10.195913   48438 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 19:49:10.195982   48438 addons.go:70] Setting storage-provisioner=true in profile "functional-384006"
	I1212 19:49:10.195999   48438 addons.go:239] Setting addon storage-provisioner=true in "functional-384006"
	I1212 19:49:10.196020   48438 host.go:66] Checking if "functional-384006" exists ...
	I1212 19:49:10.196498   48438 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
	I1212 19:49:10.197078   48438 addons.go:70] Setting default-storageclass=true in profile "functional-384006"
	I1212 19:49:10.197104   48438 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-384006"
	I1212 19:49:10.197385   48438 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
	I1212 19:49:10.200737   48438 out.go:179] * Verifying Kubernetes components...
	I1212 19:49:10.203657   48438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:49:10.242694   48438 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:49:10.242850   48438 kapi.go:59] client config for functional-384006: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.key", CAFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 19:49:10.243167   48438 addons.go:239] Setting addon default-storageclass=true in "functional-384006"
	I1212 19:49:10.243197   48438 host.go:66] Checking if "functional-384006" exists ...
	I1212 19:49:10.243613   48438 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
	I1212 19:49:10.244264   48438 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 19:49:10.248400   48438 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:10.248422   48438 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 19:49:10.248484   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:10.280006   48438 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:10.280027   48438 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 19:49:10.280091   48438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:49:10.292135   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:49:10.320079   48438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
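Both ssh clients above learn their port (32788) from the Go template visible in the cli_runner lines: `docker container inspect` resolves the host port mapped to the container's 22/tcp. A small sketch of the same lookup:

    // Sketch: resolve the host port docker mapped to the container's sshd,
    // using the exact inspect template shown in the log.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", format, "functional-384006").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh port:", strings.TrimSpace(string(out)))
    }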
	I1212 19:49:10.410663   48438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:49:10.453525   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:10.485844   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:11.196335   48438 node_ready.go:35] waiting up to 6m0s for node "functional-384006" to be "Ready" ...
	I1212 19:49:11.196458   48438 type.go:168] "Request Body" body=""
	I1212 19:49:11.196510   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:11.196726   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:11.196748   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.196769   48438 retry.go:31] will retry after 366.342967ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.196806   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:11.196817   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.196823   48438 retry.go:31] will retry after 300.335318ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
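
Note: the retry.go lines above ("will retry after 366ms / 444ms / 595ms ...") show a jittered, growing backoff around the failing kubectl apply. A minimal sketch of that pattern, assuming illustrative delays and an attempt cap (minikube's real helper lives elsewhere and may differ):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn with a delay that grows each round plus jitter,
// matching the irregular, increasing waits printed by retry.go above.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base<<i + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	_ = retryWithBackoff(5, 300*time.Millisecond, func() error {
		return errors.New("connect: connection refused")
	})
}
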
	I1212 19:49:11.196876   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:11.497399   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:11.554914   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:11.558623   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.558688   48438 retry.go:31] will retry after 444.117502ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.563799   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:11.619827   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:11.623191   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.623218   48438 retry.go:31] will retry after 549.294372ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:11.698171   48438 type.go:168] "Request Body" body=""
	I1212 19:49:11.698248   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:11.698564   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:12.003014   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:12.062616   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:12.066362   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.066391   48438 retry.go:31] will retry after 595.188251ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.173715   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:12.197048   48438 type.go:168] "Request Body" body=""
	I1212 19:49:12.197131   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:12.197395   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:12.233993   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:12.234039   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.234058   48438 retry.go:31] will retry after 392.030002ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.626804   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:12.662348   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:12.696816   48438 type.go:168] "Request Body" body=""
	I1212 19:49:12.696944   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:12.697262   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:12.708549   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:12.715333   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.715413   48438 retry.go:31] will retry after 1.207907286s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.756481   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:12.756580   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:12.756630   48438 retry.go:31] will retry after 988.700176ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:13.197091   48438 type.go:168] "Request Body" body=""
	I1212 19:49:13.197179   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:13.197507   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:13.197567   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
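
Note: the node_ready.go loop above polls GET /api/v1/nodes/functional-384006 roughly every half second and treats "connection refused" as retryable while the apiserver restarts. A minimal sketch of that wait, assuming client-go and a hypothetical kubeconfig path:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node until its Ready condition is True or the
// timeout elapses; transient GET errors are logged and retried, as in the log.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}

func main() {
	// Hypothetical kubeconfig path for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "functional-384006", 6*time.Minute); err != nil {
		panic(err)
	}
}
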
	I1212 19:49:13.697358   48438 type.go:168] "Request Body" body=""
	I1212 19:49:13.697464   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:13.697803   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:13.746091   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:13.800035   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:13.803463   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:13.803491   48438 retry.go:31] will retry after 829.308427ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:13.923746   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:13.982211   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:13.982249   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:13.982267   48438 retry.go:31] will retry after 769.179652ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:14.196516   48438 type.go:168] "Request Body" body=""
	I1212 19:49:14.196587   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:14.196865   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:14.633627   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:14.690489   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:14.693763   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:14.693798   48438 retry.go:31] will retry after 2.844765229s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:14.697018   48438 type.go:168] "Request Body" body=""
	I1212 19:49:14.697087   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:14.697405   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:14.752598   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:14.810008   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:14.810058   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:14.810075   48438 retry.go:31] will retry after 1.702576008s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:15.196507   48438 type.go:168] "Request Body" body=""
	I1212 19:49:15.196581   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:15.196896   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:15.696568   48438 type.go:168] "Request Body" body=""
	I1212 19:49:15.696635   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:15.696970   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:15.697028   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:16.196951   48438 type.go:168] "Request Body" body=""
	I1212 19:49:16.197024   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:16.197313   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:16.513895   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:16.577782   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:16.577823   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:16.577842   48438 retry.go:31] will retry after 3.833463827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:16.697243   48438 type.go:168] "Request Body" body=""
	I1212 19:49:16.697311   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:16.697616   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:17.197033   48438 type.go:168] "Request Body" body=""
	I1212 19:49:17.197116   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:17.197383   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:17.538823   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:17.596746   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:17.600222   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:17.600249   48438 retry.go:31] will retry after 2.11378985s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:17.696505   48438 type.go:168] "Request Body" body=""
	I1212 19:49:17.696573   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:17.696885   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:18.196556   48438 type.go:168] "Request Body" body=""
	I1212 19:49:18.196667   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:18.196977   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:18.197023   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:18.696638   48438 type.go:168] "Request Body" body=""
	I1212 19:49:18.696729   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:18.696984   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:19.196736   48438 type.go:168] "Request Body" body=""
	I1212 19:49:19.196812   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:19.197214   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:19.696622   48438 type.go:168] "Request Body" body=""
	I1212 19:49:19.696700   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:19.696961   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:19.714208   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:19.768038   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:19.771528   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:19.771557   48438 retry.go:31] will retry after 5.800996246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:20.197387   48438 type.go:168] "Request Body" body=""
	I1212 19:49:20.197458   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:20.197743   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:20.197788   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:20.412247   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:20.466933   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:20.470625   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:20.470653   48438 retry.go:31] will retry after 5.197371043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:20.697029   48438 type.go:168] "Request Body" body=""
	I1212 19:49:20.697099   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:20.697410   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:21.197198   48438 type.go:168] "Request Body" body=""
	I1212 19:49:21.197271   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:21.197569   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:21.697046   48438 type.go:168] "Request Body" body=""
	I1212 19:49:21.697116   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:21.697371   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:22.197188   48438 type.go:168] "Request Body" body=""
	I1212 19:49:22.197269   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:22.197585   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:22.697243   48438 type.go:168] "Request Body" body=""
	I1212 19:49:22.697314   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:22.697647   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:22.697696   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:23.197042   48438 type.go:168] "Request Body" body=""
	I1212 19:49:23.197134   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:23.197408   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:23.697049   48438 type.go:168] "Request Body" body=""
	I1212 19:49:23.697121   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:23.697429   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:24.197196   48438 type.go:168] "Request Body" body=""
	I1212 19:49:24.197268   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:24.197600   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:24.697001   48438 type.go:168] "Request Body" body=""
	I1212 19:49:24.697067   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:24.697318   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:25.196599   48438 type.go:168] "Request Body" body=""
	I1212 19:49:25.196674   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:25.197011   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:25.197067   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:25.573546   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:25.640105   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:25.640150   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:25.640168   48438 retry.go:31] will retry after 9.327300318s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:25.668309   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:25.696826   48438 type.go:168] "Request Body" body=""
	I1212 19:49:25.696923   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:25.697181   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:25.735314   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:25.738857   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:25.738887   48438 retry.go:31] will retry after 6.705148998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:26.197164   48438 type.go:168] "Request Body" body=""
	I1212 19:49:26.197240   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:26.197490   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:26.697309   48438 type.go:168] "Request Body" body=""
	I1212 19:49:26.697408   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:26.697729   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:27.197507   48438 type.go:168] "Request Body" body=""
	I1212 19:49:27.197584   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:27.197871   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:27.197919   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:27.696575   48438 type.go:168] "Request Body" body=""
	I1212 19:49:27.696652   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:27.696952   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:28.196680   48438 type.go:168] "Request Body" body=""
	I1212 19:49:28.196762   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:28.197103   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:28.696600   48438 type.go:168] "Request Body" body=""
	I1212 19:49:28.696675   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:28.696996   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:29.196525   48438 type.go:168] "Request Body" body=""
	I1212 19:49:29.196638   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:29.196926   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:29.696599   48438 type.go:168] "Request Body" body=""
	I1212 19:49:29.696677   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:29.697003   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:29.697067   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:30.197085   48438 type.go:168] "Request Body" body=""
	I1212 19:49:30.197181   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:30.197519   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:30.697033   48438 type.go:168] "Request Body" body=""
	I1212 19:49:30.697106   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:30.697351   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:31.197223   48438 type.go:168] "Request Body" body=""
	I1212 19:49:31.197295   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:31.197605   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:31.697429   48438 type.go:168] "Request Body" body=""
	I1212 19:49:31.697504   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:31.697832   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:31.697883   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:32.196518   48438 type.go:168] "Request Body" body=""
	I1212 19:49:32.196586   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:32.196831   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:32.444273   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:32.498733   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:32.502453   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:32.502484   48438 retry.go:31] will retry after 9.024395099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:32.696884   48438 type.go:168] "Request Body" body=""
	I1212 19:49:32.696967   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:32.697298   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:33.196612   48438 type.go:168] "Request Body" body=""
	I1212 19:49:33.196705   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:33.196986   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:33.696528   48438 type.go:168] "Request Body" body=""
	I1212 19:49:33.696606   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:33.696862   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:34.196549   48438 type.go:168] "Request Body" body=""
	I1212 19:49:34.196618   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:34.196944   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:34.196991   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:34.696558   48438 type.go:168] "Request Body" body=""
	I1212 19:49:34.696625   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:34.696943   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:34.968441   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:35.030670   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:35.034703   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:35.034735   48438 retry.go:31] will retry after 11.456350697s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
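
The failure mode here is client-side: kubectl's validation first fetches the OpenAPI schema from the apiserver (the GET https://localhost:8441/openapi/v2 in the stderr), so while nothing is listening on 8441 even a perfectly valid manifest fails before it is ever submitted. The error text itself names the escape hatch, --validate=false. A small Go sketch of the same invocation with that flag added, assuming the kubeconfig and manifest paths from the log exist on the host; note the flag only skips the schema download, it cannot help while the connection itself is refused.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command the log shows, plus --validate=false; sudo accepts the
	// leading VAR=value assignment just as in the logged shell invocation.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storageclass.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s err=%v\n", out, err)
}
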
	I1212 19:49:35.196975   48438 type.go:168] "Request Body" body=""
	I1212 19:49:35.197050   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:35.197325   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:35.697091   48438 type.go:168] "Request Body" body=""
	I1212 19:49:35.697164   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:35.697483   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:36.197206   48438 type.go:168] "Request Body" body=""
	I1212 19:49:36.197280   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:36.197576   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:36.197625   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:36.697028   48438 type.go:168] "Request Body" body=""
	I1212 19:49:36.697108   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:36.697363   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:37.197157   48438 type.go:168] "Request Body" body=""
	I1212 19:49:37.197231   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:37.197556   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:37.697344   48438 type.go:168] "Request Body" body=""
	I1212 19:49:37.697421   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:37.697737   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:38.197048   48438 type.go:168] "Request Body" body=""
	I1212 19:49:38.197120   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:38.197393   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:38.697237   48438 type.go:168] "Request Body" body=""
	I1212 19:49:38.697313   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:38.697687   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:38.697751   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:39.197495   48438 type.go:168] "Request Body" body=""
	I1212 19:49:39.197574   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:39.197923   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:39.696600   48438 type.go:168] "Request Body" body=""
	I1212 19:49:39.696663   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:39.696902   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:40.196826   48438 type.go:168] "Request Body" body=""
	I1212 19:49:40.196908   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:40.197247   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:40.696978   48438 type.go:168] "Request Body" body=""
	I1212 19:49:40.697049   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:40.697369   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:41.197258   48438 type.go:168] "Request Body" body=""
	I1212 19:49:41.197327   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:41.197601   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:41.197683   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:41.527120   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:41.586633   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:41.590403   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:41.590436   48438 retry.go:31] will retry after 11.748431511s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:41.696875   48438 type.go:168] "Request Body" body=""
	I1212 19:49:41.696951   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:41.697272   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:42.196642   48438 type.go:168] "Request Body" body=""
	I1212 19:49:42.196731   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:42.197083   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:42.696550   48438 type.go:168] "Request Body" body=""
	I1212 19:49:42.696647   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:42.696923   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:43.196548   48438 type.go:168] "Request Body" body=""
	I1212 19:49:43.196618   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:43.196955   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:43.696648   48438 type.go:168] "Request Body" body=""
	I1212 19:49:43.696721   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:43.697043   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:43.697102   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:44.196771   48438 type.go:168] "Request Body" body=""
	I1212 19:49:44.196840   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:44.197104   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:44.696558   48438 type.go:168] "Request Body" body=""
	I1212 19:49:44.696662   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:44.696979   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:45.196928   48438 type.go:168] "Request Body" body=""
	I1212 19:49:45.197005   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:45.197335   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:45.696564   48438 type.go:168] "Request Body" body=""
	I1212 19:49:45.696632   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:45.696941   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:46.196940   48438 type.go:168] "Request Body" body=""
	I1212 19:49:46.197010   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:46.197309   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:46.197362   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:46.491755   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:46.549211   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:46.549254   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:46.549272   48438 retry.go:31] will retry after 7.577859466s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:46.697552   48438 type.go:168] "Request Body" body=""
	I1212 19:49:46.697629   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:46.697924   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:47.196531   48438 type.go:168] "Request Body" body=""
	I1212 19:49:47.196597   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:47.196927   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:47.696631   48438 type.go:168] "Request Body" body=""
	I1212 19:49:47.696710   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:47.696981   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:48.196610   48438 type.go:168] "Request Body" body=""
	I1212 19:49:48.196684   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:48.197015   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:48.696655   48438 type.go:168] "Request Body" body=""
	I1212 19:49:48.696726   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:48.697050   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:48.697099   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:49.196624   48438 type.go:168] "Request Body" body=""
	I1212 19:49:49.196709   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:49.197019   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:49.696618   48438 type.go:168] "Request Body" body=""
	I1212 19:49:49.696695   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:49.697125   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:50.197269   48438 type.go:168] "Request Body" body=""
	I1212 19:49:50.197350   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:50.197608   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:50.697495   48438 type.go:168] "Request Body" body=""
	I1212 19:49:50.697567   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:50.697901   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:50.697955   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:51.196732   48438 type.go:168] "Request Body" body=""
	I1212 19:49:51.196803   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:51.197112   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:51.696762   48438 type.go:168] "Request Body" body=""
	I1212 19:49:51.696829   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:51.697174   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:52.196599   48438 type.go:168] "Request Body" body=""
	I1212 19:49:52.196673   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:52.196971   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:52.696606   48438 type.go:168] "Request Body" body=""
	I1212 19:49:52.696678   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:52.697012   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:53.196528   48438 type.go:168] "Request Body" body=""
	I1212 19:49:53.196606   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:53.196891   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:53.196934   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:53.339331   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:49:53.394698   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:53.398291   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:53.398322   48438 retry.go:31] will retry after 25.381584091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:53.696596   48438 type.go:168] "Request Body" body=""
	I1212 19:49:53.696686   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:53.696994   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:54.127648   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:49:54.185700   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:49:54.185751   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:54.185771   48438 retry.go:31] will retry after 18.076319981s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:49:54.196871   48438 type.go:168] "Request Body" body=""
	I1212 19:49:54.196963   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:54.197226   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:54.696517   48438 type.go:168] "Request Body" body=""
	I1212 19:49:54.696579   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:54.696863   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:55.196622   48438 type.go:168] "Request Body" body=""
	I1212 19:49:55.196694   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:55.196982   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:55.197044   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:55.696592   48438 type.go:168] "Request Body" body=""
	I1212 19:49:55.696691   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:55.696999   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:56.196994   48438 type.go:168] "Request Body" body=""
	I1212 19:49:56.197059   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:56.197324   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:56.697159   48438 type.go:168] "Request Body" body=""
	I1212 19:49:56.697233   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:56.697537   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:57.197290   48438 type.go:168] "Request Body" body=""
	I1212 19:49:57.197368   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:57.197681   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:57.197733   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:49:57.697000   48438 type.go:168] "Request Body" body=""
	I1212 19:49:57.697069   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:57.697304   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:58.196582   48438 type.go:168] "Request Body" body=""
	I1212 19:49:58.196651   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:58.196993   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:58.696564   48438 type.go:168] "Request Body" body=""
	I1212 19:49:58.696640   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:58.696958   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:59.196629   48438 type.go:168] "Request Body" body=""
	I1212 19:49:59.196697   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:59.197071   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:49:59.696921   48438 type.go:168] "Request Body" body=""
	I1212 19:49:59.696993   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:49:59.697326   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:49:59.697380   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:00.197384   48438 type.go:168] "Request Body" body=""
	I1212 19:50:00.197468   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:00.197775   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:00.696649   48438 type.go:168] "Request Body" body=""
	I1212 19:50:00.696725   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:00.696989   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:01.197059   48438 type.go:168] "Request Body" body=""
	I1212 19:50:01.197145   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:01.197509   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:01.697372   48438 type.go:168] "Request Body" body=""
	I1212 19:50:01.697463   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:01.697881   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:01.697942   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:02.196547   48438 type.go:168] "Request Body" body=""
	I1212 19:50:02.196622   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:02.196936   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:02.696592   48438 type.go:168] "Request Body" body=""
	I1212 19:50:02.696670   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:02.696998   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:03.196708   48438 type.go:168] "Request Body" body=""
	I1212 19:50:03.196781   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:03.197108   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:03.696791   48438 type.go:168] "Request Body" body=""
	I1212 19:50:03.696860   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:03.697174   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:04.196836   48438 type.go:168] "Request Body" body=""
	I1212 19:50:04.196908   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:04.197244   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:04.197301   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:04.696812   48438 type.go:168] "Request Body" body=""
	I1212 19:50:04.696891   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:04.697179   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:05.196827   48438 type.go:168] "Request Body" body=""
	I1212 19:50:05.196904   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:05.197227   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:05.696566   48438 type.go:168] "Request Body" body=""
	I1212 19:50:05.696635   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:05.696920   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:06.196948   48438 type.go:168] "Request Body" body=""
	I1212 19:50:06.197026   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:06.197368   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:06.197422   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:06.697024   48438 type.go:168] "Request Body" body=""
	I1212 19:50:06.697097   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:06.697393   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:07.197202   48438 type.go:168] "Request Body" body=""
	I1212 19:50:07.197278   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:07.197614   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:07.697404   48438 type.go:168] "Request Body" body=""
	I1212 19:50:07.697475   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:07.697790   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:08.196467   48438 type.go:168] "Request Body" body=""
	I1212 19:50:08.196533   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:08.196831   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:08.696513   48438 type.go:168] "Request Body" body=""
	I1212 19:50:08.696584   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:08.696925   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:08.696997   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:09.196531   48438 type.go:168] "Request Body" body=""
	I1212 19:50:09.196606   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:09.196936   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:09.696629   48438 type.go:168] "Request Body" body=""
	I1212 19:50:09.696697   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:09.696947   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:10.197069   48438 type.go:168] "Request Body" body=""
	I1212 19:50:10.197157   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:10.197524   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:10.697347   48438 type.go:168] "Request Body" body=""
	I1212 19:50:10.697420   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:10.697769   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:10.697839   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:11.197146   48438 type.go:168] "Request Body" body=""
	I1212 19:50:11.197258   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:11.197571   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:11.697392   48438 type.go:168] "Request Body" body=""
	I1212 19:50:11.697467   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:11.697811   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:12.197401   48438 type.go:168] "Request Body" body=""
	I1212 19:50:12.197473   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:12.197766   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:12.263038   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:50:12.317640   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:12.321089   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:50:12.321118   48438 retry.go:31] will retry after 33.331276854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:50:12.696541   48438 type.go:168] "Request Body" body=""
	I1212 19:50:12.696627   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:12.696894   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:13.196651   48438 type.go:168] "Request Body" body=""
	I1212 19:50:13.196725   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:13.197000   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:13.197046   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:13.696602   48438 type.go:168] "Request Body" body=""
	I1212 19:50:13.696674   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:13.696975   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:14.196564   48438 type.go:168] "Request Body" body=""
	I1212 19:50:14.196634   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:14.196947   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:14.696632   48438 type.go:168] "Request Body" body=""
	I1212 19:50:14.696719   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:14.697044   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:15.196623   48438 type.go:168] "Request Body" body=""
	I1212 19:50:15.196715   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:15.197032   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:15.197085   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:15.696713   48438 type.go:168] "Request Body" body=""
	I1212 19:50:15.696791   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:15.697104   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:16.197135   48438 type.go:168] "Request Body" body=""
	I1212 19:50:16.197236   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:16.197570   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:16.697411   48438 type.go:168] "Request Body" body=""
	I1212 19:50:16.697489   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:16.697833   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:17.196534   48438 type.go:168] "Request Body" body=""
	I1212 19:50:17.196602   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:17.196867   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:17.696613   48438 type.go:168] "Request Body" body=""
	I1212 19:50:17.696709   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:17.697053   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:17.697120   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:18.196648   48438 type.go:168] "Request Body" body=""
	I1212 19:50:18.196724   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:18.197072   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:18.696613   48438 type.go:168] "Request Body" body=""
	I1212 19:50:18.696679   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:18.696950   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:18.780412   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:50:18.840261   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:18.840307   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 19:50:18.840327   48438 retry.go:31] will retry after 31.549397312s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
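
Interleaved with the readiness poll, the addon apply runs on its own retry schedule: kubectl validation needs the apiserver's OpenAPI document, the download is refused, and retry.go reschedules the command ~31s later. A rough sketch of that retry-with-backoff shape, assuming a plain os/exec invocation and an invented doubling delay (minikube's real schedule is jittered and differs):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // applyWithRetry re-runs `kubectl apply` until it succeeds or the
    // attempt budget is exhausted, doubling the delay between attempts.
    func applyWithRetry(manifest string, attempts int) error {
        delay := 2 * time.Second
        var lastErr error
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("attempt %d: %v: %s", i+1, err, out)
            time.Sleep(delay)
            delay *= 2
        }
        return lastErr
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
            fmt.Println("apply failed:", err)
        }
    }

Note that the --validate=false escape hatch suggested in the stderr would only skip the OpenAPI fetch; the apply itself still needs a reachable apiserver, so it would fail on the same refused connection.
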
	[~52 identical poll cycles elided: GET https://192.168.49.2:8441/api/v1/nodes/functional-384006 every ~500ms from 19:50:19.196 through 19:50:45.197, each attempt refused with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 repeated its "will retry" warning roughly every 2.5s]
	I1212 19:50:45.653170   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:50:45.696473   48438 type.go:168] "Request Body" body=""
	I1212 19:50:45.696544   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:45.696768   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:45.722043   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:45.722078   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:45.722170   48438 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
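
Every failure in this stretch shares one root cause: nothing is accepting connections on port 8441, whether addressed as 192.168.49.2 or localhost. A quick health probe confirms that before suspecting the manifests; a minimal sketch (the address comes from the log, and InsecureSkipVerify is for illustration only, since minikube's CA is not in the system trust store):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 3 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        // /healthz answers 200 "ok" once the apiserver is serving.
        resp, err := client.Get("https://192.168.49.2:8441/healthz")
        if err != nil {
            fmt.Println("apiserver unreachable:", err) // matches the log's dial errors
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, string(body))
    }
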
	[9 more poll cycles elided, 19:50:46.197 through 19:50:50.197, all refused; node_ready.go:55 warnings at 19:50:47.697 and 19:50:49.697]
	I1212 19:50:50.390183   48438 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:50:50.447451   48438 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:50.447486   48438 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 19:50:50.447560   48438 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 19:50:50.450729   48438 out.go:179] * Enabled addons: 
	I1212 19:50:50.452858   48438 addons.go:530] duration metric: took 1m40.25694205s for enable addons: enabled=[]
	I1212 19:50:50.697432   48438 type.go:168] "Request Body" body=""
	I1212 19:50:50.697527   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:50.697885   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:51.196739   48438 type.go:168] "Request Body" body=""
	I1212 19:50:51.196816   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:51.197159   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:51.696528   48438 type.go:168] "Request Body" body=""
	I1212 19:50:51.696603   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:51.696897   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:52.196579   48438 type.go:168] "Request Body" body=""
	I1212 19:50:52.196648   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:52.196951   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:52.197004   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:52.696606   48438 type.go:168] "Request Body" body=""
	I1212 19:50:52.696677   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:52.697003   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:53.196675   48438 type.go:168] "Request Body" body=""
	I1212 19:50:53.196744   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:53.196992   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:53.696666   48438 type.go:168] "Request Body" body=""
	I1212 19:50:53.696741   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:53.697070   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:54.196757   48438 type.go:168] "Request Body" body=""
	I1212 19:50:54.196826   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:54.197113   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:54.197157   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:54.696549   48438 type.go:168] "Request Body" body=""
	I1212 19:50:54.696641   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:54.696957   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:55.196628   48438 type.go:168] "Request Body" body=""
	I1212 19:50:55.196708   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:55.197136   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:55.696829   48438 type.go:168] "Request Body" body=""
	I1212 19:50:55.696900   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:55.697229   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:56.197066   48438 type.go:168] "Request Body" body=""
	I1212 19:50:56.197131   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:56.197387   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:56.197429   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:56.697219   48438 type.go:168] "Request Body" body=""
	I1212 19:50:56.697315   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:56.697648   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:57.197432   48438 type.go:168] "Request Body" body=""
	I1212 19:50:57.197513   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:57.197815   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:57.696494   48438 type.go:168] "Request Body" body=""
	I1212 19:50:57.696561   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:57.696813   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:58.196619   48438 type.go:168] "Request Body" body=""
	I1212 19:50:58.196701   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:58.197024   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:58.696727   48438 type.go:168] "Request Body" body=""
	I1212 19:50:58.696805   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:58.697094   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:50:58.697138   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:50:59.196546   48438 type.go:168] "Request Body" body=""
	I1212 19:50:59.196633   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:59.196941   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:50:59.696655   48438 type.go:168] "Request Body" body=""
	I1212 19:50:59.696728   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:50:59.697035   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:00.197073   48438 type.go:168] "Request Body" body=""
	I1212 19:51:00.197153   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:00.197534   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:00.697068   48438 type.go:168] "Request Body" body=""
	I1212 19:51:00.697139   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:00.697403   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:00.697447   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:01.197252   48438 type.go:168] "Request Body" body=""
	I1212 19:51:01.197345   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:01.197675   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:01.697471   48438 type.go:168] "Request Body" body=""
	I1212 19:51:01.697549   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:01.697859   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:02.196610   48438 type.go:168] "Request Body" body=""
	I1212 19:51:02.196684   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:02.196940   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:02.696593   48438 type.go:168] "Request Body" body=""
	I1212 19:51:02.696665   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:02.696980   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:03.196694   48438 type.go:168] "Request Body" body=""
	I1212 19:51:03.196766   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:03.197077   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:03.197130   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:03.696767   48438 type.go:168] "Request Body" body=""
	I1212 19:51:03.696834   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:03.697143   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:04.196630   48438 type.go:168] "Request Body" body=""
	I1212 19:51:04.196704   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:04.197007   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:04.696694   48438 type.go:168] "Request Body" body=""
	I1212 19:51:04.696764   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:04.697055   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:05.196712   48438 type.go:168] "Request Body" body=""
	I1212 19:51:05.196795   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:05.197072   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:05.696568   48438 type.go:168] "Request Body" body=""
	I1212 19:51:05.696638   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:05.696994   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:05.697052   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:51:06.197016   48438 type.go:168] "Request Body" body=""
	I1212 19:51:06.197103   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:06.197772   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:06.696478   48438 type.go:168] "Request Body" body=""
	I1212 19:51:06.696543   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:06.696795   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:51:07.197509   48438 type.go:168] "Request Body" body=""
	I1212 19:51:07.197581   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:51:07.197882   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:51:08.197562   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET poll against https://192.168.49.2:8441/api/v1/nodes/functional-384006 repeats every ~500 ms from 19:51:07 through 19:52:09; every attempt fails with "connect: connection refused" (each response line shows status="" milliseconds=0), and node_ready.go:55 emits the "will retry" warning above after roughly every fourth failed attempt ...]
	I1212 19:52:08.697148   48438 type.go:168] "Request Body" body=""
	I1212 19:52:08.697227   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:08.697556   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:09.197392   48438 type.go:168] "Request Body" body=""
	I1212 19:52:09.197468   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:09.197784   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:09.197845   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:09.696549   48438 type.go:168] "Request Body" body=""
	I1212 19:52:09.696616   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:09.696887   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:10.196962   48438 type.go:168] "Request Body" body=""
	I1212 19:52:10.197039   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:10.197334   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:10.696626   48438 type.go:168] "Request Body" body=""
	I1212 19:52:10.696717   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:10.697024   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:11.196847   48438 type.go:168] "Request Body" body=""
	I1212 19:52:11.196921   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:11.197227   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:11.696601   48438 type.go:168] "Request Body" body=""
	I1212 19:52:11.696679   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:11.696981   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:11.697032   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:12.196580   48438 type.go:168] "Request Body" body=""
	I1212 19:52:12.196650   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:12.196940   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:12.696545   48438 type.go:168] "Request Body" body=""
	I1212 19:52:12.696621   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:12.696869   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:13.196568   48438 type.go:168] "Request Body" body=""
	I1212 19:52:13.196664   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:13.196980   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:13.696589   48438 type.go:168] "Request Body" body=""
	I1212 19:52:13.696666   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:13.697006   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:13.697058   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:14.196560   48438 type.go:168] "Request Body" body=""
	I1212 19:52:14.196631   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:14.196946   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:14.696636   48438 type.go:168] "Request Body" body=""
	I1212 19:52:14.696714   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:14.697058   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:15.196659   48438 type.go:168] "Request Body" body=""
	I1212 19:52:15.196740   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:15.197071   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:15.696563   48438 type.go:168] "Request Body" body=""
	I1212 19:52:15.696653   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:15.696954   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:16.196956   48438 type.go:168] "Request Body" body=""
	I1212 19:52:16.197033   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:16.197379   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:16.197433   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:16.696942   48438 type.go:168] "Request Body" body=""
	I1212 19:52:16.697013   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:16.697325   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:17.197029   48438 type.go:168] "Request Body" body=""
	I1212 19:52:17.197104   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:17.197358   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:17.697015   48438 type.go:168] "Request Body" body=""
	I1212 19:52:17.697084   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:17.697367   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:18.196629   48438 type.go:168] "Request Body" body=""
	I1212 19:52:18.196717   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:18.197023   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:18.696554   48438 type.go:168] "Request Body" body=""
	I1212 19:52:18.696628   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:18.696875   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:18.696923   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:19.196580   48438 type.go:168] "Request Body" body=""
	I1212 19:52:19.196654   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:19.196987   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:19.696532   48438 type.go:168] "Request Body" body=""
	I1212 19:52:19.696605   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:19.696921   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:20.196969   48438 type.go:168] "Request Body" body=""
	I1212 19:52:20.197044   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:20.197330   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:20.696598   48438 type.go:168] "Request Body" body=""
	I1212 19:52:20.696690   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:20.696997   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:20.697054   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:21.197019   48438 type.go:168] "Request Body" body=""
	I1212 19:52:21.197109   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:21.197420   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:21.697065   48438 type.go:168] "Request Body" body=""
	I1212 19:52:21.697171   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:21.697471   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:22.197327   48438 type.go:168] "Request Body" body=""
	I1212 19:52:22.197400   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:22.197732   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:22.697523   48438 type.go:168] "Request Body" body=""
	I1212 19:52:22.697602   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:22.697908   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:22.697961   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:23.196582   48438 type.go:168] "Request Body" body=""
	I1212 19:52:23.196653   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:23.196911   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:23.696648   48438 type.go:168] "Request Body" body=""
	I1212 19:52:23.696728   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:23.697054   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:24.196615   48438 type.go:168] "Request Body" body=""
	I1212 19:52:24.196693   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:24.197072   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:24.696554   48438 type.go:168] "Request Body" body=""
	I1212 19:52:24.696620   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:24.696867   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:25.196559   48438 type.go:168] "Request Body" body=""
	I1212 19:52:25.196634   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:25.196989   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:25.197049   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:25.696745   48438 type.go:168] "Request Body" body=""
	I1212 19:52:25.696823   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:25.697176   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:26.197032   48438 type.go:168] "Request Body" body=""
	I1212 19:52:26.197104   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:26.197365   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:26.697133   48438 type.go:168] "Request Body" body=""
	I1212 19:52:26.697207   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:26.697533   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:27.197240   48438 type.go:168] "Request Body" body=""
	I1212 19:52:27.197313   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:27.197651   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:27.197708   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:27.696997   48438 type.go:168] "Request Body" body=""
	I1212 19:52:27.697111   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:27.697348   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:28.197148   48438 type.go:168] "Request Body" body=""
	I1212 19:52:28.197218   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:28.197538   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:28.697363   48438 type.go:168] "Request Body" body=""
	I1212 19:52:28.697444   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:28.697821   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:29.197283   48438 type.go:168] "Request Body" body=""
	I1212 19:52:29.197351   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:29.197604   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:29.697409   48438 type.go:168] "Request Body" body=""
	I1212 19:52:29.697482   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:29.697829   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:29.697881   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:30.196648   48438 type.go:168] "Request Body" body=""
	I1212 19:52:30.196718   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:30.197048   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:30.696605   48438 type.go:168] "Request Body" body=""
	I1212 19:52:30.696685   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:30.696999   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:31.196917   48438 type.go:168] "Request Body" body=""
	I1212 19:52:31.196985   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:31.197286   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:31.696593   48438 type.go:168] "Request Body" body=""
	I1212 19:52:31.696671   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:31.697003   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:32.196637   48438 type.go:168] "Request Body" body=""
	I1212 19:52:32.196716   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:32.196973   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:32.197032   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:32.696666   48438 type.go:168] "Request Body" body=""
	I1212 19:52:32.696739   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:32.697092   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:33.196825   48438 type.go:168] "Request Body" body=""
	I1212 19:52:33.196900   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:33.197340   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:33.697027   48438 type.go:168] "Request Body" body=""
	I1212 19:52:33.697095   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:33.697364   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:34.197120   48438 type.go:168] "Request Body" body=""
	I1212 19:52:34.197191   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:34.197507   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:34.197557   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:34.697300   48438 type.go:168] "Request Body" body=""
	I1212 19:52:34.697378   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:34.697686   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:35.197072   48438 type.go:168] "Request Body" body=""
	I1212 19:52:35.197158   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:35.197415   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:35.697050   48438 type.go:168] "Request Body" body=""
	I1212 19:52:35.697129   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:35.697418   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:36.197163   48438 type.go:168] "Request Body" body=""
	I1212 19:52:36.197234   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:36.197573   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:36.197628   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:36.697048   48438 type.go:168] "Request Body" body=""
	I1212 19:52:36.697115   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:36.697374   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:37.197145   48438 type.go:168] "Request Body" body=""
	I1212 19:52:37.197222   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:37.197577   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:37.697363   48438 type.go:168] "Request Body" body=""
	I1212 19:52:37.697438   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:37.697758   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:38.197052   48438 type.go:168] "Request Body" body=""
	I1212 19:52:38.197121   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:38.197364   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:38.697121   48438 type.go:168] "Request Body" body=""
	I1212 19:52:38.697188   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:38.697511   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:38.697564   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:39.197148   48438 type.go:168] "Request Body" body=""
	I1212 19:52:39.197221   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:39.197541   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:39.697045   48438 type.go:168] "Request Body" body=""
	I1212 19:52:39.697121   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:39.697416   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:40.197422   48438 type.go:168] "Request Body" body=""
	I1212 19:52:40.197496   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:40.197841   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:40.696587   48438 type.go:168] "Request Body" body=""
	I1212 19:52:40.696660   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:40.697003   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:41.196830   48438 type.go:168] "Request Body" body=""
	I1212 19:52:41.196900   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:41.197165   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:41.197208   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:41.696885   48438 type.go:168] "Request Body" body=""
	I1212 19:52:41.696962   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:41.697302   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:42.197049   48438 type.go:168] "Request Body" body=""
	I1212 19:52:42.197136   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:42.197480   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:42.697034   48438 type.go:168] "Request Body" body=""
	I1212 19:52:42.697109   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:42.697359   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:43.197128   48438 type.go:168] "Request Body" body=""
	I1212 19:52:43.197206   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:43.197560   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:43.197616   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:43.697366   48438 type.go:168] "Request Body" body=""
	I1212 19:52:43.697437   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:43.697733   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:44.197049   48438 type.go:168] "Request Body" body=""
	I1212 19:52:44.197119   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:44.197383   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:44.697154   48438 type.go:168] "Request Body" body=""
	I1212 19:52:44.697224   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:44.697554   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:45.197418   48438 type.go:168] "Request Body" body=""
	I1212 19:52:45.197622   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:45.198043   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:45.198111   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:45.696799   48438 type.go:168] "Request Body" body=""
	I1212 19:52:45.696866   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:45.697155   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:46.197195   48438 type.go:168] "Request Body" body=""
	I1212 19:52:46.197330   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:46.197994   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:46.696797   48438 type.go:168] "Request Body" body=""
	I1212 19:52:46.696869   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:46.697189   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:47.196859   48438 type.go:168] "Request Body" body=""
	I1212 19:52:47.196928   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:47.197254   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:47.696598   48438 type.go:168] "Request Body" body=""
	I1212 19:52:47.696688   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:47.697025   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:47.697081   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:48.196588   48438 type.go:168] "Request Body" body=""
	I1212 19:52:48.196659   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:48.196981   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:48.696595   48438 type.go:168] "Request Body" body=""
	I1212 19:52:48.696678   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:48.696958   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:49.196596   48438 type.go:168] "Request Body" body=""
	I1212 19:52:49.196668   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:49.196997   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:49.696687   48438 type.go:168] "Request Body" body=""
	I1212 19:52:49.696757   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:49.697080   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:49.697134   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:50.197041   48438 type.go:168] "Request Body" body=""
	I1212 19:52:50.197117   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:50.197390   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:50.697208   48438 type.go:168] "Request Body" body=""
	I1212 19:52:50.697281   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:50.697595   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:51.197221   48438 type.go:168] "Request Body" body=""
	I1212 19:52:51.197312   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:51.197623   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:51.697072   48438 type.go:168] "Request Body" body=""
	I1212 19:52:51.697142   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:51.697387   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:51.697429   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:52.197188   48438 type.go:168] "Request Body" body=""
	I1212 19:52:52.197264   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:52.197590   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:52.697371   48438 type.go:168] "Request Body" body=""
	I1212 19:52:52.697445   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:52.697761   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:53.197033   48438 type.go:168] "Request Body" body=""
	I1212 19:52:53.197099   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:53.197352   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:53.697175   48438 type.go:168] "Request Body" body=""
	I1212 19:52:53.697245   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:53.697552   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:53.697607   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:54.197356   48438 type.go:168] "Request Body" body=""
	I1212 19:52:54.197428   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:54.197758   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:54.697050   48438 type.go:168] "Request Body" body=""
	I1212 19:52:54.697121   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:54.697377   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:55.197152   48438 type.go:168] "Request Body" body=""
	I1212 19:52:55.197228   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:55.197547   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:55.697340   48438 type.go:168] "Request Body" body=""
	I1212 19:52:55.697417   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:55.697762   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:55.697823   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:56.197164   48438 type.go:168] "Request Body" body=""
	I1212 19:52:56.197236   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:56.197494   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:56.697202   48438 type.go:168] "Request Body" body=""
	I1212 19:52:56.697282   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:56.697569   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:57.197331   48438 type.go:168] "Request Body" body=""
	I1212 19:52:57.197403   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:57.197743   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:57.696985   48438 type.go:168] "Request Body" body=""
	I1212 19:52:57.697054   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:57.697293   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:58.196950   48438 type.go:168] "Request Body" body=""
	I1212 19:52:58.197019   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:58.197324   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:52:58.197379   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:52:58.697072   48438 type.go:168] "Request Body" body=""
	I1212 19:52:58.697147   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:58.697456   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:59.196999   48438 type.go:168] "Request Body" body=""
	I1212 19:52:59.197066   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:59.197315   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:52:59.697129   48438 type.go:168] "Request Body" body=""
	I1212 19:52:59.697205   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:52:59.697493   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:00.196851   48438 type.go:168] "Request Body" body=""
	I1212 19:53:00.196939   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:00.197273   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:00.697007   48438 type.go:168] "Request Body" body=""
	I1212 19:53:00.697073   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:00.697327   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:00.697369   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:01.197219   48438 type.go:168] "Request Body" body=""
	I1212 19:53:01.197300   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:01.197664   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:01.697490   48438 type.go:168] "Request Body" body=""
	I1212 19:53:01.697570   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:01.697887   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:02.196566   48438 type.go:168] "Request Body" body=""
	I1212 19:53:02.196640   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:02.196991   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:02.696584   48438 type.go:168] "Request Body" body=""
	I1212 19:53:02.696662   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:02.696978   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:03.196659   48438 type.go:168] "Request Body" body=""
	I1212 19:53:03.196736   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:03.197062   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:03.197116   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:03.696744   48438 type.go:168] "Request Body" body=""
	I1212 19:53:03.696816   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:03.697096   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:04.196604   48438 type.go:168] "Request Body" body=""
	I1212 19:53:04.196696   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:04.196975   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:04.696667   48438 type.go:168] "Request Body" body=""
	I1212 19:53:04.696738   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:04.697035   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:05.196540   48438 type.go:168] "Request Body" body=""
	I1212 19:53:05.196625   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:05.196924   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:05.696636   48438 type.go:168] "Request Body" body=""
	I1212 19:53:05.696707   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:05.697025   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:05.697088   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:06.197081   48438 type.go:168] "Request Body" body=""
	I1212 19:53:06.197153   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:06.197462   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:06.697010   48438 type.go:168] "Request Body" body=""
	I1212 19:53:06.697082   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:06.697333   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:07.196594   48438 type.go:168] "Request Body" body=""
	I1212 19:53:07.196664   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:07.197028   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:07.696603   48438 type.go:168] "Request Body" body=""
	I1212 19:53:07.696677   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:07.696959   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:08.196549   48438 type.go:168] "Request Body" body=""
	I1212 19:53:08.196615   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:08.196859   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:08.196896   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:08.696526   48438 type.go:168] "Request Body" body=""
	I1212 19:53:08.696594   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:08.696893   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:09.196605   48438 type.go:168] "Request Body" body=""
	I1212 19:53:09.196693   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:09.197023   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:09.696819   48438 type.go:168] "Request Body" body=""
	I1212 19:53:09.696900   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:09.697219   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:10.197180   48438 type.go:168] "Request Body" body=""
	I1212 19:53:10.197269   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:10.197631   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:10.197708   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:10.697487   48438 type.go:168] "Request Body" body=""
	I1212 19:53:10.697560   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:10.697908   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:11.196944   48438 type.go:168] "Request Body" body=""
	I1212 19:53:11.197056   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:11.197357   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:11.696930   48438 type.go:168] "Request Body" body=""
	I1212 19:53:11.697002   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:11.697326   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:12.196909   48438 type.go:168] "Request Body" body=""
	I1212 19:53:12.196979   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:12.197321   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:12.697013   48438 type.go:168] "Request Body" body=""
	I1212 19:53:12.697077   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:12.697339   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:12.697378   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:13.197093   48438 type.go:168] "Request Body" body=""
	I1212 19:53:13.197164   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:13.197492   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:13.697289   48438 type.go:168] "Request Body" body=""
	I1212 19:53:13.697359   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:13.697687   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:14.197038   48438 type.go:168] "Request Body" body=""
	I1212 19:53:14.197112   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:14.197374   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:14.697159   48438 type.go:168] "Request Body" body=""
	I1212 19:53:14.697235   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:14.697577   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:14.697635   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:15.197270   48438 type.go:168] "Request Body" body=""
	I1212 19:53:15.197347   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:15.197686   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:15.697028   48438 type.go:168] "Request Body" body=""
	I1212 19:53:15.697098   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:15.697375   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:16.197163   48438 type.go:168] "Request Body" body=""
	I1212 19:53:16.197234   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:16.197577   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:16.697350   48438 type.go:168] "Request Body" body=""
	I1212 19:53:16.697425   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:16.697752   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:16.697808   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:17.197507   48438 type.go:168] "Request Body" body=""
	I1212 19:53:17.197577   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:17.197829   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:17.696504   48438 type.go:168] "Request Body" body=""
	I1212 19:53:17.696575   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:17.696899   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:18.196504   48438 type.go:168] "Request Body" body=""
	I1212 19:53:18.196576   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:18.196901   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:18.696544   48438 type.go:168] "Request Body" body=""
	I1212 19:53:18.696610   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:18.696900   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:19.196582   48438 type.go:168] "Request Body" body=""
	I1212 19:53:19.196663   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:19.197008   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:19.197061   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:19.696592   48438 type.go:168] "Request Body" body=""
	I1212 19:53:19.696666   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:19.696984   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:20.196979   48438 type.go:168] "Request Body" body=""
	I1212 19:53:20.197046   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:20.197295   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:20.696583   48438 type.go:168] "Request Body" body=""
	I1212 19:53:20.696657   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:20.696990   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:21.196822   48438 type.go:168] "Request Body" body=""
	I1212 19:53:21.196900   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:21.197244   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:21.197296   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:21.696752   48438 type.go:168] "Request Body" body=""
	I1212 19:53:21.696826   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:21.697073   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:22.196577   48438 type.go:168] "Request Body" body=""
	I1212 19:53:22.196648   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:22.196951   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:22.696609   48438 type.go:168] "Request Body" body=""
	I1212 19:53:22.696679   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:22.697012   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:23.196688   48438 type.go:168] "Request Body" body=""
	I1212 19:53:23.196752   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:23.197027   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:23.696698   48438 type.go:168] "Request Body" body=""
	I1212 19:53:23.696777   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:23.697096   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:23.697150   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:24.196817   48438 type.go:168] "Request Body" body=""
	I1212 19:53:24.196890   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:24.197211   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:24.696560   48438 type.go:168] "Request Body" body=""
	I1212 19:53:24.696634   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:24.696929   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:25.196601   48438 type.go:168] "Request Body" body=""
	I1212 19:53:25.196677   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:25.196990   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:25.696605   48438 type.go:168] "Request Body" body=""
	I1212 19:53:25.696679   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:25.696998   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:26.196896   48438 type.go:168] "Request Body" body=""
	I1212 19:53:26.196961   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:26.197214   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:26.197253   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:26.696574   48438 type.go:168] "Request Body" body=""
	I1212 19:53:26.696649   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:26.696959   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:27.196612   48438 type.go:168] "Request Body" body=""
	I1212 19:53:27.196684   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:27.197007   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:27.696544   48438 type.go:168] "Request Body" body=""
	I1212 19:53:27.696619   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:27.696894   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:28.196507   48438 type.go:168] "Request Body" body=""
	I1212 19:53:28.196604   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:28.196939   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:28.696532   48438 type.go:168] "Request Body" body=""
	I1212 19:53:28.696610   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:28.696931   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:28.696979   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:29.196634   48438 type.go:168] "Request Body" body=""
	I1212 19:53:29.196703   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:29.197001   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:29.696592   48438 type.go:168] "Request Body" body=""
	I1212 19:53:29.696669   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:29.696967   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:30.196961   48438 type.go:168] "Request Body" body=""
	I1212 19:53:30.197040   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:30.197390   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:30.696556   48438 type.go:168] "Request Body" body=""
	I1212 19:53:30.696640   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:30.696996   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:30.697048   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:31.197039   48438 type.go:168] "Request Body" body=""
	I1212 19:53:31.197113   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:31.197435   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:31.697109   48438 type.go:168] "Request Body" body=""
	I1212 19:53:31.697183   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:31.697494   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:32.198094   48438 type.go:168] "Request Body" body=""
	I1212 19:53:32.198180   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:32.198485   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:32.697348   48438 type.go:168] "Request Body" body=""
	I1212 19:53:32.697418   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:32.697743   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:32.697798   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:33.197520   48438 type.go:168] "Request Body" body=""
	I1212 19:53:33.197607   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:33.197978   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:33.696536   48438 type.go:168] "Request Body" body=""
	I1212 19:53:33.696612   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:33.696904   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:34.196586   48438 type.go:168] "Request Body" body=""
	I1212 19:53:34.196660   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:34.197007   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:34.696683   48438 type.go:168] "Request Body" body=""
	I1212 19:53:34.696755   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:34.697071   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:35.196545   48438 type.go:168] "Request Body" body=""
	I1212 19:53:35.196626   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:35.196916   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:35.196957   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:35.696569   48438 type.go:168] "Request Body" body=""
	I1212 19:53:35.696639   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:35.696968   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:36.197043   48438 type.go:168] "Request Body" body=""
	I1212 19:53:36.197118   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:36.197425   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:36.697036   48438 type.go:168] "Request Body" body=""
	I1212 19:53:36.697109   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:36.697356   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:37.197167   48438 type.go:168] "Request Body" body=""
	I1212 19:53:37.197245   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:37.197543   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:37.197597   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:37.697255   48438 type.go:168] "Request Body" body=""
	I1212 19:53:37.697332   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:37.697651   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:38.197013   48438 type.go:168] "Request Body" body=""
	I1212 19:53:38.197090   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:38.197364   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:38.697110   48438 type.go:168] "Request Body" body=""
	I1212 19:53:38.697196   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:38.697532   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:39.197331   48438 type.go:168] "Request Body" body=""
	I1212 19:53:39.197405   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:39.197724   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:39.197779   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:39.697068   48438 type.go:168] "Request Body" body=""
	I1212 19:53:39.697132   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:39.697395   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:40.197348   48438 type.go:168] "Request Body" body=""
	I1212 19:53:40.197427   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:40.197783   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:40.697442   48438 type.go:168] "Request Body" body=""
	I1212 19:53:40.697518   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:40.697857   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:41.196820   48438 type.go:168] "Request Body" body=""
	I1212 19:53:41.196897   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:41.197188   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:41.696606   48438 type.go:168] "Request Body" body=""
	I1212 19:53:41.696677   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:41.696997   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:41.697059   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:42.199056   48438 type.go:168] "Request Body" body=""
	I1212 19:53:42.199156   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:42.199500   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:42.697028   48438 type.go:168] "Request Body" body=""
	I1212 19:53:42.697106   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:42.697363   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:43.197146   48438 type.go:168] "Request Body" body=""
	I1212 19:53:43.197216   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:43.197509   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:43.697051   48438 type.go:168] "Request Body" body=""
	I1212 19:53:43.697127   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:43.697442   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:43.697496   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:44.196983   48438 type.go:168] "Request Body" body=""
	I1212 19:53:44.197047   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:44.197291   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:44.696605   48438 type.go:168] "Request Body" body=""
	I1212 19:53:44.696682   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:44.696999   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:45.196640   48438 type.go:168] "Request Body" body=""
	I1212 19:53:45.196734   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:45.197134   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:45.696663   48438 type.go:168] "Request Body" body=""
	I1212 19:53:45.696733   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:45.696987   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:46.196891   48438 type.go:168] "Request Body" body=""
	I1212 19:53:46.196961   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:46.197246   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:46.197291   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:46.696569   48438 type.go:168] "Request Body" body=""
	I1212 19:53:46.696642   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:46.696980   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:47.196530   48438 type.go:168] "Request Body" body=""
	I1212 19:53:47.196610   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:47.196909   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:47.696574   48438 type.go:168] "Request Body" body=""
	I1212 19:53:47.696642   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:47.696962   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:48.196546   48438 type.go:168] "Request Body" body=""
	I1212 19:53:48.196628   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:48.196975   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:48.696510   48438 type.go:168] "Request Body" body=""
	I1212 19:53:48.696581   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:48.696845   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:48.696887   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:49.196542   48438 type.go:168] "Request Body" body=""
	I1212 19:53:49.196622   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:49.196995   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:49.696556   48438 type.go:168] "Request Body" body=""
	I1212 19:53:49.696630   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:49.696954   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:50.196908   48438 type.go:168] "Request Body" body=""
	I1212 19:53:50.196982   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:50.197236   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:50.696599   48438 type.go:168] "Request Body" body=""
	I1212 19:53:50.696673   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:50.696998   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:50.697100   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:51.197065   48438 type.go:168] "Request Body" body=""
	I1212 19:53:51.197137   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:51.197471   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:51.697096   48438 type.go:168] "Request Body" body=""
	I1212 19:53:51.697167   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:51.697415   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:52.197178   48438 type.go:168] "Request Body" body=""
	I1212 19:53:52.197249   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:52.197545   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:52.697252   48438 type.go:168] "Request Body" body=""
	I1212 19:53:52.697323   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:52.697637   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:53:52.697692   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:53:53.197047   48438 type.go:168] "Request Body" body=""
	I1212 19:53:53.197114   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:53.197377   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:53:53.697133   48438 type.go:168] "Request Body" body=""
	I1212 19:53:53.697217   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:53:53.697511   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET to https://192.168.49.2:8441/api/v1/nodes/functional-384006 was retried every ~500ms through 19:54:55.196 (about 120 attempts), each logged with an empty response and milliseconds=0 (one at 19:54:45.201 with milliseconds=4); roughly every 2 seconds, first at 19:53:55.197, node_ready.go:55 warned: error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused ...]
	W1212 19:54:55.197009   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:55.696707   48438 type.go:168] "Request Body" body=""
	I1212 19:54:55.696782   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:55.697095   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:56.197044   48438 type.go:168] "Request Body" body=""
	I1212 19:54:56.197110   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:56.197358   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:56.697176   48438 type.go:168] "Request Body" body=""
	I1212 19:54:56.697247   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:56.697564   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:57.197362   48438 type.go:168] "Request Body" body=""
	I1212 19:54:57.197443   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:57.197770   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:57.197827   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:54:57.696510   48438 type.go:168] "Request Body" body=""
	I1212 19:54:57.696582   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:57.696850   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:58.196551   48438 type.go:168] "Request Body" body=""
	I1212 19:54:58.196621   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:58.196910   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:58.696537   48438 type.go:168] "Request Body" body=""
	I1212 19:54:58.696617   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:58.696970   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:59.196540   48438 type.go:168] "Request Body" body=""
	I1212 19:54:59.196642   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:59.196980   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:54:59.696618   48438 type.go:168] "Request Body" body=""
	I1212 19:54:59.696689   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:54:59.697012   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:54:59.697072   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:55:00.196536   48438 type.go:168] "Request Body" body=""
	I1212 19:55:00.196632   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:00.196977   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:00.696665   48438 type.go:168] "Request Body" body=""
	I1212 19:55:00.696746   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:00.697082   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:01.197001   48438 type.go:168] "Request Body" body=""
	I1212 19:55:01.197085   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:01.197440   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:01.697258   48438 type.go:168] "Request Body" body=""
	I1212 19:55:01.697333   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:01.697671   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:55:01.697735   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:55:02.197006   48438 type.go:168] "Request Body" body=""
	I1212 19:55:02.197095   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:02.197408   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:02.697262   48438 type.go:168] "Request Body" body=""
	I1212 19:55:02.697333   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:02.697664   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:03.197461   48438 type.go:168] "Request Body" body=""
	I1212 19:55:03.197544   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:03.197886   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:03.696539   48438 type.go:168] "Request Body" body=""
	I1212 19:55:03.696609   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:03.696903   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:04.196604   48438 type.go:168] "Request Body" body=""
	I1212 19:55:04.196692   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:04.197007   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:55:04.197059   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:55:04.696722   48438 type.go:168] "Request Body" body=""
	I1212 19:55:04.696801   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:04.697084   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:05.196551   48438 type.go:168] "Request Body" body=""
	I1212 19:55:05.196619   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:05.196920   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:05.696558   48438 type.go:168] "Request Body" body=""
	I1212 19:55:05.696654   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:05.696970   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:06.196854   48438 type.go:168] "Request Body" body=""
	I1212 19:55:06.196928   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:06.197258   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:55:06.197306   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:55:06.696660   48438 type.go:168] "Request Body" body=""
	I1212 19:55:06.696733   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:06.696983   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:07.196575   48438 type.go:168] "Request Body" body=""
	I1212 19:55:07.196663   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:07.197112   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:07.696611   48438 type.go:168] "Request Body" body=""
	I1212 19:55:07.696697   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:07.697039   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:08.196559   48438 type.go:168] "Request Body" body=""
	I1212 19:55:08.196627   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:08.196929   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:08.696573   48438 type.go:168] "Request Body" body=""
	I1212 19:55:08.696643   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:08.696979   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:55:08.697031   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:55:09.196708   48438 type.go:168] "Request Body" body=""
	I1212 19:55:09.196785   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:09.197099   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:09.696682   48438 type.go:168] "Request Body" body=""
	I1212 19:55:09.696750   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:09.697054   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:10.196593   48438 type.go:168] "Request Body" body=""
	I1212 19:55:10.196676   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:10.197018   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 19:55:10.696766   48438 type.go:168] "Request Body" body=""
	I1212 19:55:10.696855   48438 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384006" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 19:55:10.697231   48438 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 19:55:10.697295   48438 node_ready.go:55] error getting node "functional-384006" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384006": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 19:55:11.196994   48438 node_ready.go:38] duration metric: took 6m0.000614517s for node "functional-384006" to be "Ready" ...
	I1212 19:55:11.200166   48438 out.go:203] 
	W1212 19:55:11.203009   48438 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 19:55:11.203186   48438 out.go:285] * 
	W1212 19:55:11.205457   48438 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 19:55:11.208306   48438 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 19:55:18 functional-384006 containerd[5201]: time="2025-12-12T19:55:18.619140208Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 19:55:19 functional-384006 containerd[5201]: time="2025-12-12T19:55:19.705697748Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 12 19:55:19 functional-384006 containerd[5201]: time="2025-12-12T19:55:19.707955408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 12 19:55:19 functional-384006 containerd[5201]: time="2025-12-12T19:55:19.719148065Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 19:55:19 functional-384006 containerd[5201]: time="2025-12-12T19:55:19.719536310Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 19:55:20 functional-384006 containerd[5201]: time="2025-12-12T19:55:20.684590079Z" level=info msg="No images store for sha256:4e39f883043c3ea8a37d0151562bc1cf505db5f8a8ba3972284f6e3644631f36"
	Dec 12 19:55:20 functional-384006 containerd[5201]: time="2025-12-12T19:55:20.686788434Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-384006\""
	Dec 12 19:55:20 functional-384006 containerd[5201]: time="2025-12-12T19:55:20.693309100Z" level=info msg="ImageCreate event name:\"sha256:5661f32bede572b676872cc804975f90ff6296cfb902f98dcfd0a018d5cab590\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 19:55:20 functional-384006 containerd[5201]: time="2025-12-12T19:55:20.696414976Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-384006\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 19:55:21 functional-384006 containerd[5201]: time="2025-12-12T19:55:21.489496865Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 12 19:55:21 functional-384006 containerd[5201]: time="2025-12-12T19:55:21.492043704Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 12 19:55:21 functional-384006 containerd[5201]: time="2025-12-12T19:55:21.494111045Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 12 19:55:21 functional-384006 containerd[5201]: time="2025-12-12T19:55:21.505642415Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 12 19:55:22 functional-384006 containerd[5201]: time="2025-12-12T19:55:22.560550825Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 12 19:55:22 functional-384006 containerd[5201]: time="2025-12-12T19:55:22.563176029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 12 19:55:22 functional-384006 containerd[5201]: time="2025-12-12T19:55:22.571298837Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 19:55:22 functional-384006 containerd[5201]: time="2025-12-12T19:55:22.571997340Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 19:55:22 functional-384006 containerd[5201]: time="2025-12-12T19:55:22.594263603Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 12 19:55:22 functional-384006 containerd[5201]: time="2025-12-12T19:55:22.596664232Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 12 19:55:22 functional-384006 containerd[5201]: time="2025-12-12T19:55:22.598641607Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 12 19:55:22 functional-384006 containerd[5201]: time="2025-12-12T19:55:22.606795913Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 12 19:55:22 functional-384006 containerd[5201]: time="2025-12-12T19:55:22.732339413Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 12 19:55:22 functional-384006 containerd[5201]: time="2025-12-12T19:55:22.734512440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 12 19:55:22 functional-384006 containerd[5201]: time="2025-12-12T19:55:22.741473287Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 19:55:22 functional-384006 containerd[5201]: time="2025-12-12T19:55:22.741895607Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:55:26.893463    9321 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:55:26.893862    9321 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:55:26.895512    9321 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:55:26.896267    9321 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:55:26.897838    9321 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 19:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014827] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497798] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.037128] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.743560] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.524348] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 19:55:26 up 37 min,  0 user,  load average: 0.13, 0.23, 0.54
	Linux functional-384006 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 19:55:23 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:24 functional-384006 kubelet[9100]: E1212 19:55:24.004007    9100 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 19:55:24 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 19:55:24 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 19:55:24 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 827.
	Dec 12 19:55:24 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:24 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:24 functional-384006 kubelet[9197]: E1212 19:55:24.761702    9197 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 19:55:24 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 19:55:24 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 19:55:25 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 828.
	Dec 12 19:55:25 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:25 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:25 functional-384006 kubelet[9214]: E1212 19:55:25.511181    9214 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 19:55:25 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 19:55:25 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 19:55:26 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 829.
	Dec 12 19:55:26 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:26 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:26 functional-384006 kubelet[9237]: E1212 19:55:26.261991    9237 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 19:55:26 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 19:55:26 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 19:55:26 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 830.
	Dec 12 19:55:26 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 19:55:26 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
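
The kubelet crash-loop captured above (restart counters 827 through 830, roughly one restart per second) has a single root cause: kubelet v1.35 refuses to start on a cgroup v1 host unless that validation is explicitly relaxed. A minimal sketch of the override the error message refers to, using the FailCgroupV1 field it names (the file path and the way the patch would be applied are illustrative, not what this harness does):

	# Hypothetical patch file; the field name comes straight from the kubelet error above.
	cat > /tmp/kubelet-cgroupv1-patch.yaml <<'EOF'
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	# Allow kubelet >= v1.35 to start on a (deprecated) cgroup v1 host; see KEP-5573.
	failCgroupV1: false
	EOF

Note that the kubeadm warning later in this report adds that the SystemVerification preflight check must also be skipped explicitly for this setting to help.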
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-384006 -n functional-384006
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-384006 -n functional-384006: exit status 2 (333.751492ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-384006" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.26s)
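
The post-mortem helper decides whether kubectl commands are worth running by querying a single status field through a Go template, and treats exit status 2 as "component stopped" rather than an infrastructure error. The same check can be reproduced by hand against this profile (a sketch; the output for this run would be "Stopped"):

	# Print only the apiserver state for the failed profile.
	out/minikube-linux-arm64 status --format='{{.APIServer}}' -p functional-384006 -n functional-384006
	# Exit status 2 signals a stopped component, which the harness logs as
	# "may be ok" and uses to skip the kubectl-based post-mortem steps.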

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (735.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-384006 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1212 19:57:22.897049    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:59:51.906460    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:01:14.971862    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:02:22.896662    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:04:51.910642    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:07:22.896792    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-384006 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m12.61686822s)

                                                
                                                
-- stdout --
	* [functional-384006] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-384006" primary control-plane node in "functional-384006" cluster
	* Pulling base image v0.0.48-1765505794-22112 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000900998s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001147362s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[... identical kubeadm init stdout and stderr to the "X Error starting cluster" block above ...]
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
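
minikube's own suggestion closes the stderr above. Combining it with the extra-config flag this test already passes gives the obvious next manual step; a sketch only, and whether a systemd cgroup driver actually clears the cgroup v1 validation on this 5.15 AWS kernel is unverified:

	# Retry the restart with the cgroup driver minikube suggests for issue #4172.
	out/minikube-linux-arm64 start -p functional-384006 \
	  --extra-config=kubelet.cgroup-driver=systemd \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all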
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-384006 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m12.620360963s for "functional-384006" cluster.
I1212 20:07:40.430717    4120 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-384006
helpers_test.go:244: (dbg) docker inspect functional-384006:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17",
	        "Created": "2025-12-12T19:40:49.413785329Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43086,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T19:40:49.485581335Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/hosts",
	        "LogPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17-json.log",
	        "Name": "/functional-384006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-384006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-384006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17",
	                "LowerDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296-init/diff:/var/lib/docker/overlay2/e045d4bf347c64f3cbf42a97f0cb5729ed5699bda73ca5751717f555f7c01df1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/merged",
	                "UpperDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/diff",
	                "WorkDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-384006",
	                "Source": "/var/lib/docker/volumes/functional-384006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-384006",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-384006",
	                "name.minikube.sigs.k8s.io": "functional-384006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "36cb954f7d4f6bf90d415ba6b309740af43913afba20f6d7d93ec3c7d90d4de5",
	            "SandboxKey": "/var/run/docker/netns/36cb954f7d4f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-384006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:63:42:b7:50:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef3790c143c0333ab10341d6a40177cef53914dddf926d048a811221f7b4d25e",
	                    "EndpointID": "d9f77e46696253f9c3ce8a0a36703d7a03738ae348c39276dbe99fc3079fb5ee",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-384006",
	                        "b1a98cbc4698"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
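For reference, the JSON above is the raw output of "docker container inspect functional-384006"; the provisioning log further down narrows the same document with Go templates. A minimal sketch of that pattern (shell, assuming the container is still running):

	# whole-document inspect, as captured above
	docker container inspect functional-384006
	# single field via Go template: container state
	docker container inspect -f '{{.State.Status}}' functional-384006
	# the template the harness itself uses to find the host port mapped to SSH (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-384006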
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-384006 -n functional-384006
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-384006 -n functional-384006: exit status 2 (298.013214ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-008271 image ls --format yaml --alsologtostderr                                                                                              │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image   │ functional-008271 image ls --format json --alsologtostderr                                                                                              │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image   │ functional-008271 image ls --format table --alsologtostderr                                                                                             │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ ssh     │ functional-008271 ssh pgrep buildkitd                                                                                                                   │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │                     │
	│ image   │ functional-008271 image build -t localhost/my-image:functional-008271 testdata/build --alsologtostderr                                                  │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image   │ functional-008271 image ls                                                                                                                              │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ delete  │ -p functional-008271                                                                                                                                    │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ start   │ -p functional-384006 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │                     │
	│ start   │ -p functional-384006 --alsologtostderr -v=8                                                                                                             │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:49 UTC │                     │
	│ cache   │ functional-384006 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ functional-384006 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ functional-384006 cache add registry.k8s.io/pause:latest                                                                                                │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ functional-384006 cache add minikube-local-cache-test:functional-384006                                                                                 │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ functional-384006 cache delete minikube-local-cache-test:functional-384006                                                                              │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ ssh     │ functional-384006 ssh sudo crictl images                                                                                                                │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ ssh     │ functional-384006 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ ssh     │ functional-384006 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │                     │
	│ cache   │ functional-384006 cache reload                                                                                                                          │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ ssh     │ functional-384006 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ kubectl │ functional-384006 kubectl -- --context functional-384006 get pods                                                                                       │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │                     │
	│ start   │ -p functional-384006 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:55:27
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:55:27.852724   54219 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:55:27.853298   54219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:55:27.853302   54219 out.go:374] Setting ErrFile to fd 2...
	I1212 19:55:27.853307   54219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:55:27.853572   54219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 19:55:27.853965   54219 out.go:368] Setting JSON to false
	I1212 19:55:27.854729   54219 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2277,"bootTime":1765567051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 19:55:27.854784   54219 start.go:143] virtualization:  
	I1212 19:55:27.858422   54219 out.go:179] * [functional-384006] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 19:55:27.861585   54219 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:55:27.861670   54219 notify.go:221] Checking for updates...
	I1212 19:55:27.868224   54219 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:55:27.871239   54219 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:55:27.874218   54219 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	I1212 19:55:27.877241   54219 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 19:55:27.880290   54219 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:55:27.883683   54219 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 19:55:27.883824   54219 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:55:27.904994   54219 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 19:55:27.905107   54219 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:55:27.972320   54219 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 19:55:27.96314904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 19:55:27.972416   54219 docker.go:319] overlay module found
	I1212 19:55:27.975641   54219 out.go:179] * Using the docker driver based on existing profile
	I1212 19:55:27.978549   54219 start.go:309] selected driver: docker
	I1212 19:55:27.978557   54219 start.go:927] validating driver "docker" against &{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:55:27.978631   54219 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:55:27.978726   54219 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:55:28.035973   54219 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 19:55:28.026224666 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 19:55:28.036393   54219 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 19:55:28.036415   54219 cni.go:84] Creating CNI manager for ""
	I1212 19:55:28.036463   54219 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 19:55:28.036537   54219 start.go:353] cluster config:
	{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:55:28.039865   54219 out.go:179] * Starting "functional-384006" primary control-plane node in "functional-384006" cluster
	I1212 19:55:28.042798   54219 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 19:55:28.046082   54219 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 19:55:28.048968   54219 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 19:55:28.049006   54219 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 19:55:28.049015   54219 cache.go:65] Caching tarball of preloaded images
	I1212 19:55:28.049057   54219 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 19:55:28.049116   54219 preload.go:238] Found /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 19:55:28.049125   54219 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 19:55:28.049240   54219 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/config.json ...
	I1212 19:55:28.070140   54219 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 19:55:28.070152   54219 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 19:55:28.070172   54219 cache.go:243] Successfully downloaded all kic artifacts
	I1212 19:55:28.070201   54219 start.go:360] acquireMachinesLock for functional-384006: {Name:mk3334c8fedf7efc32fb4628474f2cba3c1d9181 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:55:28.070267   54219 start.go:364] duration metric: took 47.145µs to acquireMachinesLock for "functional-384006"
	I1212 19:55:28.070285   54219 start.go:96] Skipping create...Using existing machine configuration
	I1212 19:55:28.070289   54219 fix.go:54] fixHost starting: 
	I1212 19:55:28.070558   54219 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
	I1212 19:55:28.087483   54219 fix.go:112] recreateIfNeeded on functional-384006: state=Running err=<nil>
	W1212 19:55:28.087503   54219 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 19:55:28.090814   54219 out.go:252] * Updating the running docker "functional-384006" container ...
	I1212 19:55:28.090839   54219 machine.go:94] provisionDockerMachine start ...
	I1212 19:55:28.090929   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:28.108521   54219 main.go:143] libmachine: Using SSH client type: native
	I1212 19:55:28.108845   54219 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:55:28.108851   54219 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 19:55:28.259057   54219 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-384006
	
	I1212 19:55:28.259071   54219 ubuntu.go:182] provisioning hostname "functional-384006"
	I1212 19:55:28.259129   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:28.275402   54219 main.go:143] libmachine: Using SSH client type: native
	I1212 19:55:28.275704   54219 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:55:28.275713   54219 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-384006 && echo "functional-384006" | sudo tee /etc/hostname
	I1212 19:55:28.436755   54219 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-384006
	
	I1212 19:55:28.436820   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:28.461420   54219 main.go:143] libmachine: Using SSH client type: native
	I1212 19:55:28.461717   54219 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:55:28.461739   54219 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-384006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-384006/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-384006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 19:55:28.612044   54219 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 19:55:28.612060   54219 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-2315/.minikube}
	I1212 19:55:28.612075   54219 ubuntu.go:190] setting up certificates
	I1212 19:55:28.612092   54219 provision.go:84] configureAuth start
	I1212 19:55:28.612163   54219 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384006
	I1212 19:55:28.632765   54219 provision.go:143] copyHostCerts
	I1212 19:55:28.632832   54219 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem, removing ...
	I1212 19:55:28.632839   54219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem
	I1212 19:55:28.632906   54219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem (1123 bytes)
	I1212 19:55:28.633087   54219 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem, removing ...
	I1212 19:55:28.633091   54219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem
	I1212 19:55:28.633116   54219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem (1679 bytes)
	I1212 19:55:28.633174   54219 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem, removing ...
	I1212 19:55:28.633178   54219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem
	I1212 19:55:28.633202   54219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem (1078 bytes)
	I1212 19:55:28.633253   54219 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem org=jenkins.functional-384006 san=[127.0.0.1 192.168.49.2 functional-384006 localhost minikube]
	I1212 19:55:28.793482   54219 provision.go:177] copyRemoteCerts
	I1212 19:55:28.793529   54219 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 19:55:28.793567   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:28.810312   54219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:55:28.915572   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 19:55:28.933605   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 19:55:28.951138   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 19:55:28.968522   54219 provision.go:87] duration metric: took 356.418282ms to configureAuth
	I1212 19:55:28.968541   54219 ubuntu.go:206] setting minikube options for container-runtime
	I1212 19:55:28.968740   54219 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 19:55:28.968745   54219 machine.go:97] duration metric: took 877.902402ms to provisionDockerMachine
	I1212 19:55:28.968752   54219 start.go:293] postStartSetup for "functional-384006" (driver="docker")
	I1212 19:55:28.968762   54219 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 19:55:28.968808   54219 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 19:55:28.968851   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:28.987014   54219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:55:29.092173   54219 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 19:55:29.095606   54219 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 19:55:29.095622   54219 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 19:55:29.095634   54219 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-2315/.minikube/addons for local assets ...
	I1212 19:55:29.095686   54219 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-2315/.minikube/files for local assets ...
	I1212 19:55:29.095770   54219 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1212 19:55:29.095858   54219 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> hosts in /etc/test/nested/copy/4120
	I1212 19:55:29.095909   54219 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4120
	I1212 19:55:29.103304   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1212 19:55:29.119777   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts --> /etc/test/nested/copy/4120/hosts (40 bytes)
	I1212 19:55:29.137094   54219 start.go:296] duration metric: took 168.327905ms for postStartSetup
	I1212 19:55:29.137179   54219 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 19:55:29.137221   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:29.155438   54219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:55:29.256753   54219 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 19:55:29.261489   54219 fix.go:56] duration metric: took 1.191194255s for fixHost
	I1212 19:55:29.261504   54219 start.go:83] releasing machines lock for "functional-384006", held for 1.19123098s
	I1212 19:55:29.261570   54219 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384006
	I1212 19:55:29.278501   54219 ssh_runner.go:195] Run: cat /version.json
	I1212 19:55:29.278542   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:29.278786   54219 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 19:55:29.278838   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:29.300866   54219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:55:29.303322   54219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:55:29.403647   54219 ssh_runner.go:195] Run: systemctl --version
	I1212 19:55:29.503423   54219 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 19:55:29.507672   54219 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 19:55:29.507733   54219 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 19:55:29.515681   54219 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 19:55:29.515695   54219 start.go:496] detecting cgroup driver to use...
	I1212 19:55:29.515726   54219 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 19:55:29.515780   54219 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 19:55:29.531132   54219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 19:55:29.543869   54219 docker.go:218] disabling cri-docker service (if available) ...
	I1212 19:55:29.543922   54219 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 19:55:29.559268   54219 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 19:55:29.572058   54219 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 19:55:29.685297   54219 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 19:55:29.805225   54219 docker.go:234] disabling docker service ...
	I1212 19:55:29.805279   54219 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 19:55:29.822098   54219 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 19:55:29.834865   54219 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 19:55:29.949324   54219 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 19:55:30.087483   54219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 19:55:30.100955   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 19:55:30.116237   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 19:55:30.126127   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 19:55:30.136085   54219 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 19:55:30.136147   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 19:55:30.145914   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:55:30.154991   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 19:55:30.163972   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:55:30.172470   54219 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 19:55:30.180930   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 19:55:30.190361   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 19:55:30.199337   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 19:55:30.208975   54219 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 19:55:30.216623   54219 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 19:55:30.223993   54219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:55:30.330122   54219 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 19:55:30.473295   54219 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 19:55:30.473369   54219 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 19:55:30.477639   54219 start.go:564] Will wait 60s for crictl version
	I1212 19:55:30.477693   54219 ssh_runner.go:195] Run: which crictl
	I1212 19:55:30.481548   54219 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 19:55:30.504633   54219 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
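	The version block above is crictl querying containerd through the endpoint written to /etc/crictl.yaml at 19:55:30.100955. An equivalent standalone check, passing the endpoint on the command line instead of via the config file (a sketch):
	
	  # same query without relying on /etc/crictl.yaml
	  sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version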
	I1212 19:55:30.504687   54219 ssh_runner.go:195] Run: containerd --version
	I1212 19:55:30.523789   54219 ssh_runner.go:195] Run: containerd --version
	I1212 19:55:30.548955   54219 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 19:55:30.551786   54219 cli_runner.go:164] Run: docker network inspect functional-384006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 19:55:30.567944   54219 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 19:55:30.574767   54219 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1212 19:55:30.577669   54219 kubeadm.go:884] updating cluster {Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 19:55:30.577791   54219 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 19:55:30.577868   54219 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 19:55:30.602150   54219 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 19:55:30.602162   54219 containerd.go:534] Images already preloaded, skipping extraction
	I1212 19:55:30.602217   54219 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 19:55:30.625907   54219 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 19:55:30.625919   54219 cache_images.go:86] Images are preloaded, skipping loading
	I1212 19:55:30.625925   54219 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1212 19:55:30.626026   54219 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-384006 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 19:55:30.626113   54219 ssh_runner.go:195] Run: sudo crictl info
	I1212 19:55:30.649188   54219 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1212 19:55:30.649208   54219 cni.go:84] Creating CNI manager for ""
	I1212 19:55:30.649216   54219 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 19:55:30.649224   54219 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 19:55:30.649244   54219 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-384006 NodeName:functional-384006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 19:55:30.649349   54219 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-384006"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
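	
	The kubeadm config above carries the user's ExtraConfig override: enable-admission-plugins=NamespaceAutoProvision replaces the default plugin list recorded at 19:55:30.649188. Once the apiserver is up, a quick check that the flag landed (a sketch, assuming the conventional static-pod name kube-apiserver-functional-384006):
	
	  # inspect the apiserver static pod for the overridden flag
	  kubectl --context functional-384006 -n kube-system get pod kube-apiserver-functional-384006 -o yaml | grep enable-admission-plugins
	  # expected: - --enable-admission-plugins=NamespaceAutoProvision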
	
	I1212 19:55:30.649412   54219 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 19:55:30.656757   54219 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 19:55:30.656810   54219 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 19:55:30.663814   54219 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 19:55:30.675878   54219 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 19:55:30.688262   54219 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1212 19:55:30.703971   54219 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 19:55:30.708408   54219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:55:30.839166   54219 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:55:31.445221   54219 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006 for IP: 192.168.49.2
	I1212 19:55:31.445232   54219 certs.go:195] generating shared ca certs ...
	I1212 19:55:31.445248   54219 certs.go:227] acquiring lock for ca certs: {Name:mk39256c1929fe0803d745b94bd58afc348a7e3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:55:31.445419   54219 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key
	I1212 19:55:31.445478   54219 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key
	I1212 19:55:31.445485   54219 certs.go:257] generating profile certs ...
	I1212 19:55:31.445581   54219 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.key
	I1212 19:55:31.445645   54219 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key.6e756d1b
	I1212 19:55:31.445694   54219 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key
	I1212 19:55:31.445823   54219 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem (1338 bytes)
	W1212 19:55:31.445865   54219 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1212 19:55:31.445873   54219 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 19:55:31.445899   54219 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem (1078 bytes)
	I1212 19:55:31.445931   54219 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem (1123 bytes)
	I1212 19:55:31.445954   54219 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem (1679 bytes)
	I1212 19:55:31.446005   54219 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1212 19:55:31.446654   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 19:55:31.468075   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 19:55:31.484808   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 19:55:31.501104   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 19:55:31.519018   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 19:55:31.536328   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 19:55:31.553581   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 19:55:31.570191   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 19:55:31.586954   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1212 19:55:31.603358   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 19:55:31.620509   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1212 19:55:31.637987   54219 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 19:55:31.650484   54219 ssh_runner.go:195] Run: openssl version
	I1212 19:55:31.656450   54219 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1212 19:55:31.663636   54219 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1212 19:55:31.671141   54219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1212 19:55:31.674842   54219 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:40 /usr/share/ca-certificates/4120.pem
	I1212 19:55:31.674900   54219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1212 19:55:31.715596   54219 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 19:55:31.723059   54219 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1212 19:55:31.730233   54219 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1212 19:55:31.737626   54219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1212 19:55:31.741161   54219 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:40 /usr/share/ca-certificates/41202.pem
	I1212 19:55:31.741213   54219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1212 19:55:31.783908   54219 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 19:55:31.791542   54219 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:55:31.799333   54219 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 19:55:31.806999   54219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:55:31.810570   54219 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:30 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:55:31.810630   54219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:55:31.851440   54219 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
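The ln/openssl pairs above implement the standard OpenSSL CA directory layout: each PEM under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its subject hash plus a `.0` suffix, which is why the `test -L` probes use opaque names like b5213941.0. A minimal sketch of the same procedure for one certificate, using names taken from the log above:

  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
  sudo test -L "/etc/ssl/certs/${h}.0" && echo "linked as ${h}.0"
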
	I1212 19:55:31.858926   54219 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:55:31.862520   54219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 19:55:31.903666   54219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 19:55:31.944997   54219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 19:55:31.985858   54219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 19:55:32.026779   54219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 19:55:32.067925   54219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
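Each `-checkend 86400` call above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would force regeneration before the restart proceeds. The same check looped over the flat client certs probed in the log (a hand-rolled sketch, not minikube's actual code; the etcd certs live under the etcd/ subdirectory and are checked separately above):

  for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
    sudo openssl x509 -noout -checkend 86400 \
        -in "/var/lib/minikube/certs/${c}.crt" && echo "${c}: valid >24h"
  done
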
	I1212 19:55:32.110481   54219 kubeadm.go:401] StartCluster: {Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:55:32.110555   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 19:55:32.110624   54219 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:55:32.136703   54219 cri.go:89] found id: ""
	I1212 19:55:32.136771   54219 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 19:55:32.144223   54219 kubeadm.go:417] found existing configuration files, will attempt cluster restart
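The `ls` just above is the restart heuristic: when /var/lib/kubelet/kubeadm-flags.env, /var/lib/kubelet/config.yaml and /var/lib/minikube/etcd all exist, minikube attempts an in-place control-plane restart instead of a fresh `kubeadm init`, exactly as the "will attempt cluster restart" line reports. The probe on its own, as a sketch:

  sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd \
    && echo "existing cluster state found, restart is possible"
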
	I1212 19:55:32.144262   54219 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 19:55:32.144312   54219 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 19:55:32.151339   54219 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:55:32.151833   54219 kubeconfig.go:125] found "functional-384006" server: "https://192.168.49.2:8441"
	I1212 19:55:32.153024   54219 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 19:55:32.160890   54219 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-12 19:40:57.602349197 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-12 19:55:30.697011388 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
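Drift detection here is simply `diff -u` between the config written during the previous start and the freshly rendered one: a non-zero exit marks the cluster for reconfiguration, and the unified diff above shows the only change is the admission-plugin list requested by this test run. The check can be reproduced by hand with the same paths:

  sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
    && echo "config in sync" || echo "drift detected, reconfigure"
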
	I1212 19:55:32.160901   54219 kubeadm.go:1161] stopping kube-system containers ...
	I1212 19:55:32.160919   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1212 19:55:32.160971   54219 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:55:32.185826   54219 cri.go:89] found id: ""
	I1212 19:55:32.185884   54219 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 19:55:32.204086   54219 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 19:55:32.212130   54219 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 12 19:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 12 19:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 12 19:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 12 19:45 /etc/kubernetes/scheduler.conf
	
	I1212 19:55:32.212191   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 19:55:32.219934   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 19:55:32.227897   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:55:32.227949   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 19:55:32.235243   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 19:55:32.242858   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:55:32.242920   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 19:55:32.250701   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 19:55:32.258298   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:55:32.258372   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
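The grep/rm pairs above prune any kubeconfig that no longer points at https://control-plane.minikube.internal:8441 so the kubeconfig phase below can regenerate it; admin.conf passes the grep and is kept. The same sweep expressed as a loop (a sketch of the logic, not minikube's actual code):

  for f in kubelet controller-manager scheduler; do
    sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/${f}.conf" \
      || sudo rm -f "/etc/kubernetes/${f}.conf"
  done
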
	I1212 19:55:32.265710   54219 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 19:55:32.273454   54219 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 19:55:32.324121   54219 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 19:55:33.892385   54219 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.568235814s)
	I1212 19:55:33.892459   54219 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 19:55:34.100445   54219 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 19:55:34.171354   54219 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 19:55:34.217083   54219 api_server.go:52] waiting for apiserver process to appear ...
	I1212 19:55:34.217158   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:34.717278   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:35.217351   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:35.717787   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:36.217788   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:36.717351   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:37.218074   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:37.717373   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:38.218212   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:38.717990   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:39.217746   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:39.717717   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:40.217500   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:40.718081   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:41.217959   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:41.717497   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:42.218218   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:42.717340   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:43.217997   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:43.717351   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:44.217978   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:44.717885   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:45.217387   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:45.718121   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:46.217288   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:46.718053   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:47.217318   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:47.717728   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:48.218067   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:48.717326   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:49.217512   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:49.717353   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:50.217741   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:50.717983   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:51.217333   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:51.717999   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:52.217773   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:52.717402   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:53.217334   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:53.717268   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:54.218070   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:54.717712   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:55.217290   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:55.718107   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:56.217424   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:56.717836   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:57.217448   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:57.718053   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:58.217955   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:58.717942   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:59.218252   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:59.717973   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:00.218214   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:00.718129   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:01.217818   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:01.717354   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:02.218222   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:02.717312   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:03.217601   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:03.717316   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:04.217287   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:04.718088   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:05.217741   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:05.717294   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:06.218217   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:06.717867   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:07.217283   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:07.717349   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:08.217366   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:08.717546   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:09.218108   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:09.717381   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:10.217293   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:10.717333   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:11.217921   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:11.717764   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:12.217784   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:12.718179   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:13.218229   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:13.717368   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:14.217920   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:14.717247   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:15.218046   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:15.717383   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:16.218006   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:16.718040   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:17.217291   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:17.717910   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:18.218203   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:18.717788   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:19.217278   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:19.718149   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:20.217534   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:20.717322   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:21.218045   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:21.717355   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:22.218081   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:22.717268   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:23.218208   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:23.717289   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:24.217232   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:24.717930   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:25.218161   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:25.718192   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:26.217327   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:26.717452   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:27.218230   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:27.717354   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:28.217306   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:28.717853   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:29.218101   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:29.717649   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:30.218027   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:30.718035   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:31.217283   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:31.717340   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:32.218050   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:32.717819   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:33.217245   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:33.717370   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
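The long run of pgrep calls above is minikube's apiserver wait loop: the same pattern is probed on a roughly 500 ms cadence (visible in the timestamps) until a kube-apiserver process appears or the timer expires, at which point it falls through to the log collection below. A bounded hand-rolled equivalent:

  # poll for up to ~60s at the same cadence as the log above
  for i in $(seq 1 120); do
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
    sleep 0.5
  done
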
	I1212 19:56:34.217941   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:34.218012   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:34.255372   54219 cri.go:89] found id: ""
	I1212 19:56:34.255386   54219 logs.go:282] 0 containers: []
	W1212 19:56:34.255399   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:34.255404   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:34.255464   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:34.281284   54219 cri.go:89] found id: ""
	I1212 19:56:34.281297   54219 logs.go:282] 0 containers: []
	W1212 19:56:34.281303   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:34.281308   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:34.281363   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:34.304259   54219 cri.go:89] found id: ""
	I1212 19:56:34.304273   54219 logs.go:282] 0 containers: []
	W1212 19:56:34.304279   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:34.304284   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:34.304338   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:34.327600   54219 cri.go:89] found id: ""
	I1212 19:56:34.327613   54219 logs.go:282] 0 containers: []
	W1212 19:56:34.327620   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:34.327625   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:34.327678   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:34.352303   54219 cri.go:89] found id: ""
	I1212 19:56:34.352317   54219 logs.go:282] 0 containers: []
	W1212 19:56:34.352323   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:34.352328   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:34.352385   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:34.375938   54219 cri.go:89] found id: ""
	I1212 19:56:34.375951   54219 logs.go:282] 0 containers: []
	W1212 19:56:34.375958   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:34.375963   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:34.376019   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:34.399635   54219 cri.go:89] found id: ""
	I1212 19:56:34.399648   54219 logs.go:282] 0 containers: []
	W1212 19:56:34.399655   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:34.399663   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:34.399675   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:34.457482   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:34.457501   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:34.467864   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:34.467879   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:34.532394   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:34.523991   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:34.524531   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:34.526241   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:34.526742   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:34.528425   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:56:34.523991   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:34.524531   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:34.526241   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:34.526742   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:34.528425   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:56:34.532405   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:34.532415   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:34.595426   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:34.595444   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
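When the apiserver stays absent, minikube gathers a fixed diagnostic bundle: the kubelet and containerd journals, filtered dmesg, a describe-nodes attempt, and a container-status listing. Every command appears verbatim in the log above, so the bundle can be replayed manually on the node:

  sudo journalctl -u kubelet -n 400
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
  sudo journalctl -u containerd -n 400
  sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
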
	I1212 19:56:37.126278   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:37.136103   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:37.136162   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:37.160403   54219 cri.go:89] found id: ""
	I1212 19:56:37.160416   54219 logs.go:282] 0 containers: []
	W1212 19:56:37.160422   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:37.160428   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:37.160483   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:37.184487   54219 cri.go:89] found id: ""
	I1212 19:56:37.184500   54219 logs.go:282] 0 containers: []
	W1212 19:56:37.184507   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:37.184512   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:37.184582   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:37.226352   54219 cri.go:89] found id: ""
	I1212 19:56:37.226366   54219 logs.go:282] 0 containers: []
	W1212 19:56:37.226373   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:37.226378   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:37.226435   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:37.258223   54219 cri.go:89] found id: ""
	I1212 19:56:37.258267   54219 logs.go:282] 0 containers: []
	W1212 19:56:37.258274   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:37.258280   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:37.258349   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:37.285540   54219 cri.go:89] found id: ""
	I1212 19:56:37.285554   54219 logs.go:282] 0 containers: []
	W1212 19:56:37.285561   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:37.285566   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:37.285622   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:37.309113   54219 cri.go:89] found id: ""
	I1212 19:56:37.309126   54219 logs.go:282] 0 containers: []
	W1212 19:56:37.309132   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:37.309147   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:37.309226   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:37.332041   54219 cri.go:89] found id: ""
	I1212 19:56:37.332054   54219 logs.go:282] 0 containers: []
	W1212 19:56:37.332061   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:37.332069   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:37.332079   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:37.387421   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:37.387440   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:37.397657   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:37.397672   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:37.461255   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:37.453122   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:37.453687   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:37.455442   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:37.455987   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:37.457488   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:56:37.453122   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:37.453687   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:37.455442   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:37.455987   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:37.457488   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:56:37.461265   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:37.461275   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:37.523429   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:37.523446   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:40.054218   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:40.066551   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:40.066620   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:40.099245   54219 cri.go:89] found id: ""
	I1212 19:56:40.099260   54219 logs.go:282] 0 containers: []
	W1212 19:56:40.099267   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:40.099273   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:40.099336   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:40.127637   54219 cri.go:89] found id: ""
	I1212 19:56:40.127653   54219 logs.go:282] 0 containers: []
	W1212 19:56:40.127660   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:40.127666   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:40.127728   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:40.154877   54219 cri.go:89] found id: ""
	I1212 19:56:40.154892   54219 logs.go:282] 0 containers: []
	W1212 19:56:40.154899   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:40.154904   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:40.154966   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:40.186457   54219 cri.go:89] found id: ""
	I1212 19:56:40.186471   54219 logs.go:282] 0 containers: []
	W1212 19:56:40.186478   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:40.186483   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:40.186540   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:40.223505   54219 cri.go:89] found id: ""
	I1212 19:56:40.223520   54219 logs.go:282] 0 containers: []
	W1212 19:56:40.223527   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:40.223532   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:40.223589   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:40.264967   54219 cri.go:89] found id: ""
	I1212 19:56:40.264981   54219 logs.go:282] 0 containers: []
	W1212 19:56:40.264987   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:40.264992   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:40.265064   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:40.288851   54219 cri.go:89] found id: ""
	I1212 19:56:40.288865   54219 logs.go:282] 0 containers: []
	W1212 19:56:40.288871   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:40.288879   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:40.288889   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:40.345104   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:40.345122   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:40.355393   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:40.355408   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:40.421074   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:40.412933   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:40.413606   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:40.415194   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:40.415715   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:40.417273   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:56:40.412933   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:40.413606   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:40.415194   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:40.415715   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:40.417273   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:56:40.421086   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:40.421100   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:40.484292   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:40.484310   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:43.012558   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:43.022764   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:43.022820   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:43.046602   54219 cri.go:89] found id: ""
	I1212 19:56:43.046617   54219 logs.go:282] 0 containers: []
	W1212 19:56:43.046623   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:43.046628   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:43.046688   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:43.070683   54219 cri.go:89] found id: ""
	I1212 19:56:43.070697   54219 logs.go:282] 0 containers: []
	W1212 19:56:43.070703   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:43.070715   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:43.070769   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:43.094890   54219 cri.go:89] found id: ""
	I1212 19:56:43.094904   54219 logs.go:282] 0 containers: []
	W1212 19:56:43.094911   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:43.094915   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:43.094971   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:43.123965   54219 cri.go:89] found id: ""
	I1212 19:56:43.123978   54219 logs.go:282] 0 containers: []
	W1212 19:56:43.123984   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:43.123989   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:43.124043   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:43.149003   54219 cri.go:89] found id: ""
	I1212 19:56:43.149017   54219 logs.go:282] 0 containers: []
	W1212 19:56:43.149024   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:43.149028   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:43.149084   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:43.177565   54219 cri.go:89] found id: ""
	I1212 19:56:43.177578   54219 logs.go:282] 0 containers: []
	W1212 19:56:43.177584   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:43.177589   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:43.177654   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:43.203765   54219 cri.go:89] found id: ""
	I1212 19:56:43.203779   54219 logs.go:282] 0 containers: []
	W1212 19:56:43.203785   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:43.203793   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:43.203803   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:43.267789   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:43.267807   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:43.278476   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:43.278493   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:43.342414   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:43.333163   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:43.333997   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:43.335535   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:43.336094   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:43.337887   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:56:43.333163   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:43.333997   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:43.335535   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:43.336094   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:43.337887   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:56:43.342426   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:43.342436   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:43.406378   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:43.406398   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
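Every describe-nodes attempt in this section fails identically: nothing answers on localhost:8441 because no kube-apiserver container ever started, which is consistent with the empty crictl sweeps above. Two quick probes for that state (generic diagnostics, not commands taken from this log):

  sudo ss -ltnp | grep 8441 || echo "nothing bound to 8441"
  curl -ks https://localhost:8441/healthz || echo "apiserver unreachable"
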
	I1212 19:56:45.939180   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:45.950923   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:45.950984   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:45.980081   54219 cri.go:89] found id: ""
	I1212 19:56:45.980095   54219 logs.go:282] 0 containers: []
	W1212 19:56:45.980102   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:45.980106   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:45.980162   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:46.008401   54219 cri.go:89] found id: ""
	I1212 19:56:46.008417   54219 logs.go:282] 0 containers: []
	W1212 19:56:46.008425   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:46.008431   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:46.008500   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:46.037350   54219 cri.go:89] found id: ""
	I1212 19:56:46.037364   54219 logs.go:282] 0 containers: []
	W1212 19:56:46.037382   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:46.037388   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:46.037447   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:46.062477   54219 cri.go:89] found id: ""
	I1212 19:56:46.062491   54219 logs.go:282] 0 containers: []
	W1212 19:56:46.062498   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:46.062503   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:46.062562   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:46.088314   54219 cri.go:89] found id: ""
	I1212 19:56:46.088328   54219 logs.go:282] 0 containers: []
	W1212 19:56:46.088335   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:46.088340   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:46.088397   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:46.118483   54219 cri.go:89] found id: ""
	I1212 19:56:46.118496   54219 logs.go:282] 0 containers: []
	W1212 19:56:46.118503   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:46.118513   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:46.118574   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:46.142723   54219 cri.go:89] found id: ""
	I1212 19:56:46.142737   54219 logs.go:282] 0 containers: []
	W1212 19:56:46.142744   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:46.142752   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:46.142773   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:46.213691   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:46.204216   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:46.204961   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:46.206958   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:46.207684   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:46.209470   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:56:46.213700   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:46.213710   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:46.286149   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:46.286168   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:46.313728   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:46.313743   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:46.372694   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:46.372711   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
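The crictl queries above are how minikube decides whether any control-plane container exists; because "crictl ps -a" includes exited containers, an empty ID list for every component means the static pods were never created at all, not that they are crash-looping. A rough way to confirm that by hand from inside the node (a sketch using standard kubeadm paths and stock crictl/systemd commands, not minikube's own tooling):

	sudo systemctl is-active kubelet          # the kubelet must be running to create static pods
	sudo ls /etc/kubernetes/manifests/        # static-pod manifests the control plane is launched from
	sudo crictl pods --name kube-apiserver    # any sandbox ever created for the apiserver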
	I1212 19:56:48.883344   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:48.893476   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:48.893532   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:48.917365   54219 cri.go:89] found id: ""
	I1212 19:56:48.917379   54219 logs.go:282] 0 containers: []
	W1212 19:56:48.917386   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:48.917391   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:48.917446   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:48.941342   54219 cri.go:89] found id: ""
	I1212 19:56:48.941356   54219 logs.go:282] 0 containers: []
	W1212 19:56:48.941363   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:48.941367   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:48.941428   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:48.966988   54219 cri.go:89] found id: ""
	I1212 19:56:48.967001   54219 logs.go:282] 0 containers: []
	W1212 19:56:48.967008   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:48.967013   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:48.967070   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:48.990387   54219 cri.go:89] found id: ""
	I1212 19:56:48.990400   54219 logs.go:282] 0 containers: []
	W1212 19:56:48.990407   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:48.990412   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:48.990474   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:49.016237   54219 cri.go:89] found id: ""
	I1212 19:56:49.016251   54219 logs.go:282] 0 containers: []
	W1212 19:56:49.016257   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:49.016263   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:49.016334   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:49.040263   54219 cri.go:89] found id: ""
	I1212 19:56:49.040276   54219 logs.go:282] 0 containers: []
	W1212 19:56:49.040283   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:49.040289   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:49.040346   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:49.064604   54219 cri.go:89] found id: ""
	I1212 19:56:49.064618   54219 logs.go:282] 0 containers: []
	W1212 19:56:49.064625   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:49.064633   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:49.064643   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:49.122132   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:49.122150   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:49.132901   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:49.132916   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:49.203010   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:49.192320   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:49.192966   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:49.194927   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:49.195674   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:49.197449   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:56:49.192320   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:49.192966   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:49.194927   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:49.195674   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:49.197449   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:56:49.203028   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:49.203038   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:49.277223   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:49.277242   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:51.807432   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:51.817646   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:51.817706   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:51.843424   54219 cri.go:89] found id: ""
	I1212 19:56:51.843438   54219 logs.go:282] 0 containers: []
	W1212 19:56:51.843444   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:51.843449   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:51.843510   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:51.868210   54219 cri.go:89] found id: ""
	I1212 19:56:51.868223   54219 logs.go:282] 0 containers: []
	W1212 19:56:51.868230   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:51.868235   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:51.868290   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:51.892493   54219 cri.go:89] found id: ""
	I1212 19:56:51.892506   54219 logs.go:282] 0 containers: []
	W1212 19:56:51.892513   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:51.892518   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:51.892577   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:51.917111   54219 cri.go:89] found id: ""
	I1212 19:56:51.917124   54219 logs.go:282] 0 containers: []
	W1212 19:56:51.917143   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:51.917148   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:51.917203   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:51.945367   54219 cri.go:89] found id: ""
	I1212 19:56:51.945381   54219 logs.go:282] 0 containers: []
	W1212 19:56:51.945387   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:51.945392   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:51.945449   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:51.970026   54219 cri.go:89] found id: ""
	I1212 19:56:51.970040   54219 logs.go:282] 0 containers: []
	W1212 19:56:51.970047   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:51.970053   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:51.970108   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:51.994534   54219 cri.go:89] found id: ""
	I1212 19:56:51.994547   54219 logs.go:282] 0 containers: []
	W1212 19:56:51.994553   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:51.994563   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:51.994573   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:52.028818   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:52.028848   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:52.090429   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:52.090450   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:52.101879   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:52.101895   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:52.171776   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:52.163507   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:52.164111   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:52.165920   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:52.166464   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:52.168011   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:56:52.163507   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:52.164111   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:52.165920   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:52.166464   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:52.168011   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:56:52.171787   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:52.171800   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:54.740626   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:54.750925   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:54.750995   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:54.780366   54219 cri.go:89] found id: ""
	I1212 19:56:54.780379   54219 logs.go:282] 0 containers: []
	W1212 19:56:54.780386   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:54.780391   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:54.780449   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:54.804094   54219 cri.go:89] found id: ""
	I1212 19:56:54.804107   54219 logs.go:282] 0 containers: []
	W1212 19:56:54.804113   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:54.804118   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:54.804173   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:54.828262   54219 cri.go:89] found id: ""
	I1212 19:56:54.828276   54219 logs.go:282] 0 containers: []
	W1212 19:56:54.828283   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:54.828288   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:54.828346   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:54.851328   54219 cri.go:89] found id: ""
	I1212 19:56:54.851340   54219 logs.go:282] 0 containers: []
	W1212 19:56:54.851347   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:54.851352   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:54.851406   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:54.874948   54219 cri.go:89] found id: ""
	I1212 19:56:54.874971   54219 logs.go:282] 0 containers: []
	W1212 19:56:54.874978   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:54.874983   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:54.875049   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:54.899059   54219 cri.go:89] found id: ""
	I1212 19:56:54.899072   54219 logs.go:282] 0 containers: []
	W1212 19:56:54.899079   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:54.899085   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:54.899139   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:54.922912   54219 cri.go:89] found id: ""
	I1212 19:56:54.922944   54219 logs.go:282] 0 containers: []
	W1212 19:56:54.922952   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:54.922959   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:54.922969   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:54.982944   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:54.982963   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:54.993620   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:54.993643   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:55.063883   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:55.055908   11437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:55.056618   11437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:55.058221   11437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:55.058538   11437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:55.060159   11437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:56:55.055908   11437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:55.056618   11437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:55.058221   11437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:55.058538   11437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:55.060159   11437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:56:55.063895   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:55.063905   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:55.126641   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:55.126661   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:57.654341   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:57.664332   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:57.664398   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:57.690295   54219 cri.go:89] found id: ""
	I1212 19:56:57.690312   54219 logs.go:282] 0 containers: []
	W1212 19:56:57.690319   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:57.690324   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:57.690378   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:57.715389   54219 cri.go:89] found id: ""
	I1212 19:56:57.715403   54219 logs.go:282] 0 containers: []
	W1212 19:56:57.715409   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:57.715414   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:57.715485   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:57.741214   54219 cri.go:89] found id: ""
	I1212 19:56:57.741228   54219 logs.go:282] 0 containers: []
	W1212 19:56:57.741234   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:57.741239   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:57.741302   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:57.766791   54219 cri.go:89] found id: ""
	I1212 19:56:57.766804   54219 logs.go:282] 0 containers: []
	W1212 19:56:57.766811   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:57.766817   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:57.766876   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:57.791413   54219 cri.go:89] found id: ""
	I1212 19:56:57.791427   54219 logs.go:282] 0 containers: []
	W1212 19:56:57.791434   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:57.791439   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:57.791494   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:57.815197   54219 cri.go:89] found id: ""
	I1212 19:56:57.815211   54219 logs.go:282] 0 containers: []
	W1212 19:56:57.815218   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:57.815223   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:57.815291   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:57.839238   54219 cri.go:89] found id: ""
	I1212 19:56:57.839251   54219 logs.go:282] 0 containers: []
	W1212 19:56:57.839258   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:57.839265   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:57.839275   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:57.895387   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:57.895408   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:57.906723   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:57.906738   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:57.970462   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:57.962358   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:57.962925   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:57.964418   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:57.964860   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:57.966350   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:56:57.962358   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:57.962925   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:57.964418   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:57.964860   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:57.966350   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:56:57.970473   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:57.970483   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:58.035426   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:58.035459   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:00.567794   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:00.577750   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:00.577811   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:00.601472   54219 cri.go:89] found id: ""
	I1212 19:57:00.601485   54219 logs.go:282] 0 containers: []
	W1212 19:57:00.601492   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:00.601497   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:00.601552   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:00.624990   54219 cri.go:89] found id: ""
	I1212 19:57:00.625003   54219 logs.go:282] 0 containers: []
	W1212 19:57:00.625009   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:00.625014   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:00.625069   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:00.652831   54219 cri.go:89] found id: ""
	I1212 19:57:00.652845   54219 logs.go:282] 0 containers: []
	W1212 19:57:00.652852   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:00.652857   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:00.652913   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:00.676463   54219 cri.go:89] found id: ""
	I1212 19:57:00.676477   54219 logs.go:282] 0 containers: []
	W1212 19:57:00.676484   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:00.676489   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:00.676544   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:00.700820   54219 cri.go:89] found id: ""
	I1212 19:57:00.700833   54219 logs.go:282] 0 containers: []
	W1212 19:57:00.700840   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:00.700845   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:00.700904   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:00.728048   54219 cri.go:89] found id: ""
	I1212 19:57:00.728061   54219 logs.go:282] 0 containers: []
	W1212 19:57:00.728068   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:00.728073   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:00.728129   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:00.754114   54219 cri.go:89] found id: ""
	I1212 19:57:00.754127   54219 logs.go:282] 0 containers: []
	W1212 19:57:00.754134   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:00.754142   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:00.754152   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:00.783733   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:00.783749   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:00.842004   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:00.842021   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:00.852440   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:00.852455   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:00.914781   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:00.906826   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:00.907342   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:00.908876   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:00.909350   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:00.910854   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:00.906826   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:00.907342   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:00.908876   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:00.909350   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:00.910854   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:00.914792   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:00.914802   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:03.477311   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:03.488847   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:03.488902   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:03.517173   54219 cri.go:89] found id: ""
	I1212 19:57:03.517186   54219 logs.go:282] 0 containers: []
	W1212 19:57:03.517194   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:03.517198   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:03.517266   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:03.545723   54219 cri.go:89] found id: ""
	I1212 19:57:03.545737   54219 logs.go:282] 0 containers: []
	W1212 19:57:03.545750   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:03.545755   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:03.545812   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:03.572600   54219 cri.go:89] found id: ""
	I1212 19:57:03.572614   54219 logs.go:282] 0 containers: []
	W1212 19:57:03.572622   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:03.572626   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:03.572688   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:03.597001   54219 cri.go:89] found id: ""
	I1212 19:57:03.597015   54219 logs.go:282] 0 containers: []
	W1212 19:57:03.597026   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:03.597031   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:03.597088   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:03.625021   54219 cri.go:89] found id: ""
	I1212 19:57:03.625034   54219 logs.go:282] 0 containers: []
	W1212 19:57:03.625041   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:03.625046   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:03.625104   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:03.653842   54219 cri.go:89] found id: ""
	I1212 19:57:03.653856   54219 logs.go:282] 0 containers: []
	W1212 19:57:03.653864   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:03.653869   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:03.653926   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:03.677783   54219 cri.go:89] found id: ""
	I1212 19:57:03.677797   54219 logs.go:282] 0 containers: []
	W1212 19:57:03.677804   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:03.677812   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:03.677822   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:03.736594   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:03.736617   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:03.747247   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:03.747264   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:03.809956   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:03.801703   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:03.802457   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:03.804050   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:03.804612   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:03.806253   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:03.801703   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:03.802457   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:03.804050   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:03.804612   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:03.806253   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:03.809965   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:03.809987   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:03.871011   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:03.871029   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:06.399328   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:06.409365   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:06.409423   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:06.433061   54219 cri.go:89] found id: ""
	I1212 19:57:06.433075   54219 logs.go:282] 0 containers: []
	W1212 19:57:06.433082   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:06.433094   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:06.433154   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:06.481872   54219 cri.go:89] found id: ""
	I1212 19:57:06.481886   54219 logs.go:282] 0 containers: []
	W1212 19:57:06.481893   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:06.481898   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:06.481954   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:06.510179   54219 cri.go:89] found id: ""
	I1212 19:57:06.510192   54219 logs.go:282] 0 containers: []
	W1212 19:57:06.510200   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:06.510204   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:06.510264   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:06.543022   54219 cri.go:89] found id: ""
	I1212 19:57:06.543036   54219 logs.go:282] 0 containers: []
	W1212 19:57:06.543043   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:06.543048   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:06.543104   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:06.570071   54219 cri.go:89] found id: ""
	I1212 19:57:06.570091   54219 logs.go:282] 0 containers: []
	W1212 19:57:06.570100   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:06.570105   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:06.570170   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:06.599741   54219 cri.go:89] found id: ""
	I1212 19:57:06.599754   54219 logs.go:282] 0 containers: []
	W1212 19:57:06.599761   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:06.599779   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:06.599858   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:06.624514   54219 cri.go:89] found id: ""
	I1212 19:57:06.624528   54219 logs.go:282] 0 containers: []
	W1212 19:57:06.624534   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:06.624542   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:06.624553   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:06.635592   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:06.635610   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:06.702713   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:06.694419   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:06.694856   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:06.696741   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:06.697131   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:06.698788   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:06.694419   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:06.694856   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:06.696741   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:06.697131   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:06.698788   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:06.702724   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:06.702734   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:06.765240   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:06.765258   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:06.793023   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:06.793039   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:09.351721   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:09.361738   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:09.361798   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:09.386854   54219 cri.go:89] found id: ""
	I1212 19:57:09.386867   54219 logs.go:282] 0 containers: []
	W1212 19:57:09.386875   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:09.386880   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:09.386944   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:09.412114   54219 cri.go:89] found id: ""
	I1212 19:57:09.412127   54219 logs.go:282] 0 containers: []
	W1212 19:57:09.412134   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:09.412139   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:09.412197   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:09.449831   54219 cri.go:89] found id: ""
	I1212 19:57:09.449844   54219 logs.go:282] 0 containers: []
	W1212 19:57:09.449854   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:09.449859   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:09.449913   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:09.478096   54219 cri.go:89] found id: ""
	I1212 19:57:09.478109   54219 logs.go:282] 0 containers: []
	W1212 19:57:09.478127   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:09.478133   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:09.478205   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:09.509051   54219 cri.go:89] found id: ""
	I1212 19:57:09.509064   54219 logs.go:282] 0 containers: []
	W1212 19:57:09.509072   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:09.509077   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:09.509140   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:09.533239   54219 cri.go:89] found id: ""
	I1212 19:57:09.533253   54219 logs.go:282] 0 containers: []
	W1212 19:57:09.533259   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:09.533265   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:09.533320   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:09.559093   54219 cri.go:89] found id: ""
	I1212 19:57:09.559108   54219 logs.go:282] 0 containers: []
	W1212 19:57:09.559114   54219 logs.go:284] No container was found matching "kindnet"
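	Before each gathering pass, cri.go probes every expected control-plane component with crictl; --quiet prints bare container IDs, one per line, so empty output is exactly what produces the found id: "" and "0 containers" lines above. A hedged sketch of that enumeration (must run as root on the node; the component list is copied from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, name := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				// The "No container was found matching ..." case above.
				fmt.Printf("no container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %v\n", name, ids)
		}
	}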
	I1212 19:57:09.559122   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:09.559144   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:09.569994   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:09.570010   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:09.632936   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:09.623962   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:09.624715   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:09.626476   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:09.627047   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:09.628827   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:09.623962   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:09.624715   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:09.626476   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:09.627047   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:09.628827   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:09.632947   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:09.632957   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:09.694797   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:09.694815   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:09.723095   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:09.723124   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
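	The whole sequence repeats on a roughly three-second cadence (19:57:06, :09, :12, ...), and each cycle opens with a pgrep probe for a running apiserver process. A simplified sketch of that polling loop, with local exec standing in for the SSH runner and an assumed two-minute deadline:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiServerRunning mirrors the probe in the log: -f matches against the
	// full command line, -x requires the pattern to match it exactly, -n
	// picks the newest match; exit status 0 means a process was found.
	func apiServerRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // assumed, not from the log
		for time.Now().Before(deadline) {
			if apiServerRunning() {
				fmt.Println("kube-apiserver process found")
				return
			}
			time.Sleep(3 * time.Second) // matches the ~3s spacing of the cycles above
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}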
	I1212 19:57:12.279206   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:12.289157   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:12.289218   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:12.314051   54219 cri.go:89] found id: ""
	I1212 19:57:12.314065   54219 logs.go:282] 0 containers: []
	W1212 19:57:12.314071   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:12.314077   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:12.314146   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:12.338981   54219 cri.go:89] found id: ""
	I1212 19:57:12.338995   54219 logs.go:282] 0 containers: []
	W1212 19:57:12.339002   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:12.339007   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:12.339064   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:12.364272   54219 cri.go:89] found id: ""
	I1212 19:57:12.364285   54219 logs.go:282] 0 containers: []
	W1212 19:57:12.364294   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:12.364299   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:12.364356   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:12.388633   54219 cri.go:89] found id: ""
	I1212 19:57:12.388647   54219 logs.go:282] 0 containers: []
	W1212 19:57:12.388654   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:12.388659   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:12.388717   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:12.412315   54219 cri.go:89] found id: ""
	I1212 19:57:12.412330   54219 logs.go:282] 0 containers: []
	W1212 19:57:12.412337   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:12.412342   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:12.412399   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:12.435919   54219 cri.go:89] found id: ""
	I1212 19:57:12.435932   54219 logs.go:282] 0 containers: []
	W1212 19:57:12.435938   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:12.435944   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:12.436010   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:12.464586   54219 cri.go:89] found id: ""
	I1212 19:57:12.464600   54219 logs.go:282] 0 containers: []
	W1212 19:57:12.464607   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:12.464615   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:12.464625   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:12.531126   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:12.531144   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:12.541720   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:12.541737   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:12.607440   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:12.598720   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:12.599599   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:12.601461   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:12.602112   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:12.603704   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:12.598720   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:12.599599   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:12.601461   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:12.602112   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:12.603704   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
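	The describe-nodes probe is deliberately host-independent: it invokes the node's own versioned kubectl binary against the in-guest kubeconfig, so the repeated failure above cannot be blamed on the host's kubeconfig. A sketch of that single step (paths copied verbatim from the log; it must run inside the node):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := `sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			// Exits with status 1 while the apiserver is unreachable.
			fmt.Println("describe nodes failed:", err)
		}
	}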
	I1212 19:57:12.607450   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:12.607460   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:12.669638   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:12.669657   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:15.197082   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:15.207136   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:15.207197   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:15.232075   54219 cri.go:89] found id: ""
	I1212 19:57:15.232089   54219 logs.go:282] 0 containers: []
	W1212 19:57:15.232095   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:15.232101   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:15.232159   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:15.256640   54219 cri.go:89] found id: ""
	I1212 19:57:15.256654   54219 logs.go:282] 0 containers: []
	W1212 19:57:15.256661   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:15.256668   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:15.256725   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:15.281708   54219 cri.go:89] found id: ""
	I1212 19:57:15.281722   54219 logs.go:282] 0 containers: []
	W1212 19:57:15.281729   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:15.281751   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:15.281811   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:15.306602   54219 cri.go:89] found id: ""
	I1212 19:57:15.306615   54219 logs.go:282] 0 containers: []
	W1212 19:57:15.306622   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:15.306627   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:15.306683   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:15.330704   54219 cri.go:89] found id: ""
	I1212 19:57:15.330718   54219 logs.go:282] 0 containers: []
	W1212 19:57:15.330724   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:15.330730   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:15.330788   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:15.356237   54219 cri.go:89] found id: ""
	I1212 19:57:15.356251   54219 logs.go:282] 0 containers: []
	W1212 19:57:15.356258   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:15.356263   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:15.356322   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:15.384137   54219 cri.go:89] found id: ""
	I1212 19:57:15.384149   54219 logs.go:282] 0 containers: []
	W1212 19:57:15.384155   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:15.384163   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:15.384174   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:15.394815   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:15.394831   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:15.464384   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:15.455162   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.455895   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.457601   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.458207   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.459801   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:15.455162   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.455895   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.457601   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.458207   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.459801   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:15.464402   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:15.464413   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:15.531093   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:15.531112   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:15.558272   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:15.558287   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:18.114881   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:18.124888   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:18.124947   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:18.153733   54219 cri.go:89] found id: ""
	I1212 19:57:18.153747   54219 logs.go:282] 0 containers: []
	W1212 19:57:18.153753   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:18.153758   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:18.153819   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:18.179987   54219 cri.go:89] found id: ""
	I1212 19:57:18.180001   54219 logs.go:282] 0 containers: []
	W1212 19:57:18.180007   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:18.180012   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:18.180069   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:18.208210   54219 cri.go:89] found id: ""
	I1212 19:57:18.208223   54219 logs.go:282] 0 containers: []
	W1212 19:57:18.208230   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:18.208235   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:18.208290   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:18.240237   54219 cri.go:89] found id: ""
	I1212 19:57:18.240252   54219 logs.go:282] 0 containers: []
	W1212 19:57:18.240258   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:18.240263   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:18.240321   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:18.263335   54219 cri.go:89] found id: ""
	I1212 19:57:18.263349   54219 logs.go:282] 0 containers: []
	W1212 19:57:18.263356   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:18.263361   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:18.263416   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:18.286920   54219 cri.go:89] found id: ""
	I1212 19:57:18.286933   54219 logs.go:282] 0 containers: []
	W1212 19:57:18.286940   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:18.286945   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:18.286999   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:18.311040   54219 cri.go:89] found id: ""
	I1212 19:57:18.311053   54219 logs.go:282] 0 containers: []
	W1212 19:57:18.311060   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:18.311068   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:18.311077   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:18.366520   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:18.366538   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:18.376885   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:18.376903   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:18.439989   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:18.432083   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.432645   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.434309   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.434875   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.436419   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:18.432083   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.432645   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.434309   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.434875   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.436419   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:18.440010   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:18.440020   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:18.511364   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:18.511384   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:21.043380   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:21.053290   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:21.053345   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:21.077334   54219 cri.go:89] found id: ""
	I1212 19:57:21.077348   54219 logs.go:282] 0 containers: []
	W1212 19:57:21.077355   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:21.077360   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:21.077424   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:21.102108   54219 cri.go:89] found id: ""
	I1212 19:57:21.102122   54219 logs.go:282] 0 containers: []
	W1212 19:57:21.102129   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:21.102141   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:21.102198   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:21.125941   54219 cri.go:89] found id: ""
	I1212 19:57:21.125955   54219 logs.go:282] 0 containers: []
	W1212 19:57:21.125962   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:21.125967   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:21.126022   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:21.150198   54219 cri.go:89] found id: ""
	I1212 19:57:21.150211   54219 logs.go:282] 0 containers: []
	W1212 19:57:21.150218   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:21.150229   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:21.150284   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:21.177722   54219 cri.go:89] found id: ""
	I1212 19:57:21.177736   54219 logs.go:282] 0 containers: []
	W1212 19:57:21.177743   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:21.177748   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:21.177806   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:21.205490   54219 cri.go:89] found id: ""
	I1212 19:57:21.205504   54219 logs.go:282] 0 containers: []
	W1212 19:57:21.205511   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:21.205516   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:21.205574   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:21.230104   54219 cri.go:89] found id: ""
	I1212 19:57:21.230118   54219 logs.go:282] 0 containers: []
	W1212 19:57:21.230125   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:21.230132   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:21.230148   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:21.286638   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:21.286655   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:21.297043   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:21.297058   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:21.358837   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:21.350431   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:21.351064   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:21.352763   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:21.353316   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:21.354959   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:21.350431   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:21.351064   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:21.352763   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:21.353316   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:21.354959   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:21.358847   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:21.358858   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:21.425656   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:21.425676   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:23.965162   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:23.974936   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:23.975001   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:23.998921   54219 cri.go:89] found id: ""
	I1212 19:57:23.998935   54219 logs.go:282] 0 containers: []
	W1212 19:57:23.998942   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:23.998947   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:23.999007   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:24.028254   54219 cri.go:89] found id: ""
	I1212 19:57:24.028283   54219 logs.go:282] 0 containers: []
	W1212 19:57:24.028291   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:24.028296   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:24.028365   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:24.053461   54219 cri.go:89] found id: ""
	I1212 19:57:24.053475   54219 logs.go:282] 0 containers: []
	W1212 19:57:24.053482   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:24.053487   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:24.053546   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:24.082160   54219 cri.go:89] found id: ""
	I1212 19:57:24.082175   54219 logs.go:282] 0 containers: []
	W1212 19:57:24.082182   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:24.082187   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:24.082247   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:24.111368   54219 cri.go:89] found id: ""
	I1212 19:57:24.111381   54219 logs.go:282] 0 containers: []
	W1212 19:57:24.111388   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:24.111394   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:24.111452   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:24.139886   54219 cri.go:89] found id: ""
	I1212 19:57:24.139900   54219 logs.go:282] 0 containers: []
	W1212 19:57:24.139907   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:24.139912   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:24.139966   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:24.165622   54219 cri.go:89] found id: ""
	I1212 19:57:24.165636   54219 logs.go:282] 0 containers: []
	W1212 19:57:24.165644   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:24.165652   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:24.165661   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:24.223024   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:24.223042   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:24.234034   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:24.234049   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:24.300286   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:24.292018   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:24.292708   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:24.294225   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:24.294703   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:24.296238   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:24.292018   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:24.292708   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:24.294225   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:24.294703   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:24.296238   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:24.300298   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:24.300308   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:24.366297   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:24.366324   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:26.892882   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:26.903710   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:26.903767   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:26.928734   54219 cri.go:89] found id: ""
	I1212 19:57:26.928748   54219 logs.go:282] 0 containers: []
	W1212 19:57:26.928754   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:26.928759   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:26.928815   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:26.951741   54219 cri.go:89] found id: ""
	I1212 19:57:26.951754   54219 logs.go:282] 0 containers: []
	W1212 19:57:26.951760   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:26.951765   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:26.951820   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:26.977319   54219 cri.go:89] found id: ""
	I1212 19:57:26.977332   54219 logs.go:282] 0 containers: []
	W1212 19:57:26.977339   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:26.977343   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:26.977396   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:27.005917   54219 cri.go:89] found id: ""
	I1212 19:57:27.005931   54219 logs.go:282] 0 containers: []
	W1212 19:57:27.005937   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:27.005942   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:27.005997   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:27.031546   54219 cri.go:89] found id: ""
	I1212 19:57:27.031561   54219 logs.go:282] 0 containers: []
	W1212 19:57:27.031568   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:27.031573   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:27.031630   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:27.055510   54219 cri.go:89] found id: ""
	I1212 19:57:27.055524   54219 logs.go:282] 0 containers: []
	W1212 19:57:27.055530   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:27.055535   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:27.055593   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:27.083350   54219 cri.go:89] found id: ""
	I1212 19:57:27.083364   54219 logs.go:282] 0 containers: []
	W1212 19:57:27.083370   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:27.083389   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:27.083400   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:27.111521   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:27.111542   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:27.166541   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:27.166558   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:27.177159   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:27.177174   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:27.242522   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:27.234517   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:27.235260   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:27.236963   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:27.237352   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:27.238783   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:27.234517   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:27.235260   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:27.236963   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:27.237352   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:27.238783   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:27.242532   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:27.242542   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:29.804626   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:29.814577   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:29.814643   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:29.840378   54219 cri.go:89] found id: ""
	I1212 19:57:29.840391   54219 logs.go:282] 0 containers: []
	W1212 19:57:29.840398   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:29.840403   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:29.840462   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:29.868144   54219 cri.go:89] found id: ""
	I1212 19:57:29.868157   54219 logs.go:282] 0 containers: []
	W1212 19:57:29.868163   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:29.868168   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:29.868227   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:29.893720   54219 cri.go:89] found id: ""
	I1212 19:57:29.893734   54219 logs.go:282] 0 containers: []
	W1212 19:57:29.893740   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:29.893745   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:29.893812   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:29.922305   54219 cri.go:89] found id: ""
	I1212 19:57:29.922319   54219 logs.go:282] 0 containers: []
	W1212 19:57:29.922326   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:29.922331   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:29.922386   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:29.946347   54219 cri.go:89] found id: ""
	I1212 19:57:29.946366   54219 logs.go:282] 0 containers: []
	W1212 19:57:29.946373   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:29.946378   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:29.946434   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:29.971074   54219 cri.go:89] found id: ""
	I1212 19:57:29.971087   54219 logs.go:282] 0 containers: []
	W1212 19:57:29.971094   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:29.971099   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:29.971158   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:29.994674   54219 cri.go:89] found id: ""
	I1212 19:57:29.994697   54219 logs.go:282] 0 containers: []
	W1212 19:57:29.994704   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:29.994712   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:29.994723   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:30.005086   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:30.005108   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:30.083562   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:30.074527   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:30.075529   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:30.077335   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:30.077677   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:30.079272   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:30.074527   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:30.075529   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:30.077335   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:30.077677   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:30.079272   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:30.083572   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:30.083582   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:30.146070   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:30.146089   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:30.178521   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:30.178538   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
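	Every cycle so far has ended the same way: zero control-plane containers of any kind, which points below Kubernetes, at kubelet or containerd never creating the static pods, rather than at a crashing apiserver. A hypothetical narrowing step, not something this run performs, would be to ask systemd for kubelet's state before reading its journal:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Hypothetical check, not part of the log above: systemctl is-active
		// prints "active", "inactive", or "failed" and exits non-zero for
		// anything other than an active unit.
		out, err := exec.Command("systemctl", "is-active", "kubelet").CombinedOutput()
		fmt.Printf("kubelet: %s", out)
		if err != nil {
			fmt.Println("(non-active state, exit:", err, ")")
		}
	}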
	I1212 19:57:32.735968   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:32.746704   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:32.746766   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:32.773559   54219 cri.go:89] found id: ""
	I1212 19:57:32.773573   54219 logs.go:282] 0 containers: []
	W1212 19:57:32.773579   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:32.773584   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:32.773647   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:32.796720   54219 cri.go:89] found id: ""
	I1212 19:57:32.796733   54219 logs.go:282] 0 containers: []
	W1212 19:57:32.796749   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:32.796755   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:32.796809   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:32.819740   54219 cri.go:89] found id: ""
	I1212 19:57:32.819754   54219 logs.go:282] 0 containers: []
	W1212 19:57:32.819761   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:32.819766   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:32.819824   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:32.845383   54219 cri.go:89] found id: ""
	I1212 19:57:32.845396   54219 logs.go:282] 0 containers: []
	W1212 19:57:32.845404   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:32.845409   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:32.845463   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:32.868404   54219 cri.go:89] found id: ""
	I1212 19:57:32.868417   54219 logs.go:282] 0 containers: []
	W1212 19:57:32.868423   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:32.868428   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:32.868482   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:32.893264   54219 cri.go:89] found id: ""
	I1212 19:57:32.893278   54219 logs.go:282] 0 containers: []
	W1212 19:57:32.893284   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:32.893289   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:32.893342   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:32.918080   54219 cri.go:89] found id: ""
	I1212 19:57:32.918103   54219 logs.go:282] 0 containers: []
	W1212 19:57:32.918111   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:32.918124   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:32.918134   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:32.983660   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:32.976099   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:32.976670   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:32.978233   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:32.978797   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:32.979854   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:32.976099   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:32.976670   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:32.978233   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:32.978797   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:32.979854   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:32.983671   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:32.983682   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:33.050130   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:33.050155   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:33.077660   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:33.077675   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:33.136010   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:33.136028   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
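Each cri.go "listing CRI containers" line above pairs with a "crictl ps -a --quiet --name=<component>" run; found id: "" plus "0 containers" means that component was never created, not merely that it crashed. A sketch of the same per-component enumeration, assuming it is run inside the node (for example after `minikube ssh`) where crictl can reach containerd:

    // crilist.go - a minimal sketch mirroring the per-component checks in the
    // log. Uses the same crictl flags: all states (-a), IDs only (--quiet),
    // filtered by container name.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    	}
    	for _, name := range components {
    		out, err := exec.Command("sudo", "crictl", "ps", "-a",
    			"--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("%-24s error: %v\n", name, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			fmt.Printf("%-24s no container found\n", name)
    		} else {
    			fmt.Printf("%-24s %d container(s): %v\n", name, len(ids), ids)
    		}
    	}
    }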
	I1212 19:57:35.647123   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:35.656832   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:35.656887   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:35.680780   54219 cri.go:89] found id: ""
	I1212 19:57:35.680793   54219 logs.go:282] 0 containers: []
	W1212 19:57:35.680800   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:35.680805   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:35.680863   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:35.710149   54219 cri.go:89] found id: ""
	I1212 19:57:35.710163   54219 logs.go:282] 0 containers: []
	W1212 19:57:35.710171   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:35.710175   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:35.710233   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:35.737709   54219 cri.go:89] found id: ""
	I1212 19:57:35.737722   54219 logs.go:282] 0 containers: []
	W1212 19:57:35.737729   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:35.737734   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:35.737788   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:35.763960   54219 cri.go:89] found id: ""
	I1212 19:57:35.763974   54219 logs.go:282] 0 containers: []
	W1212 19:57:35.763986   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:35.763991   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:35.764053   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:35.796697   54219 cri.go:89] found id: ""
	I1212 19:57:35.796710   54219 logs.go:282] 0 containers: []
	W1212 19:57:35.796718   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:35.796722   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:35.796782   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:35.820208   54219 cri.go:89] found id: ""
	I1212 19:57:35.820222   54219 logs.go:282] 0 containers: []
	W1212 19:57:35.820229   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:35.820234   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:35.820289   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:35.845107   54219 cri.go:89] found id: ""
	I1212 19:57:35.845121   54219 logs.go:282] 0 containers: []
	W1212 19:57:35.845128   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:35.845135   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:35.845148   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:35.904798   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:35.904816   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:35.915282   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:35.915297   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:35.980125   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:35.972354   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:35.972745   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:35.974261   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:35.974577   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:35.976219   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:35.972354   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:35.972745   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:35.974261   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:35.974577   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:35.976219   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:35.980135   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:35.980146   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:36.042456   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:36.042476   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:38.571541   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:38.581597   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:38.581658   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:38.604774   54219 cri.go:89] found id: ""
	I1212 19:57:38.604787   54219 logs.go:282] 0 containers: []
	W1212 19:57:38.604794   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:38.604799   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:38.604853   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:38.630065   54219 cri.go:89] found id: ""
	I1212 19:57:38.630079   54219 logs.go:282] 0 containers: []
	W1212 19:57:38.630085   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:38.630090   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:38.630151   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:38.654890   54219 cri.go:89] found id: ""
	I1212 19:57:38.654903   54219 logs.go:282] 0 containers: []
	W1212 19:57:38.654910   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:38.654915   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:38.654970   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:38.682669   54219 cri.go:89] found id: ""
	I1212 19:57:38.682684   54219 logs.go:282] 0 containers: []
	W1212 19:57:38.682691   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:38.682696   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:38.682753   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:38.728209   54219 cri.go:89] found id: ""
	I1212 19:57:38.728227   54219 logs.go:282] 0 containers: []
	W1212 19:57:38.728244   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:38.728249   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:38.728317   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:38.757740   54219 cri.go:89] found id: ""
	I1212 19:57:38.757753   54219 logs.go:282] 0 containers: []
	W1212 19:57:38.757768   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:38.757774   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:38.757829   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:38.785300   54219 cri.go:89] found id: ""
	I1212 19:57:38.785314   54219 logs.go:282] 0 containers: []
	W1212 19:57:38.785321   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:38.785328   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:38.785338   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:38.841797   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:38.841815   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:38.852807   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:38.852823   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:38.918575   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:38.909773   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:38.910996   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:38.911473   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:38.912932   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:38.913369   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:38.909773   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:38.910996   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:38.911473   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:38.912932   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:38.913369   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:38.918585   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:38.918596   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:38.980647   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:38.980666   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:41.508125   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:41.518560   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:41.518620   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:41.543483   54219 cri.go:89] found id: ""
	I1212 19:57:41.543497   54219 logs.go:282] 0 containers: []
	W1212 19:57:41.543504   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:41.543509   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:41.543565   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:41.568460   54219 cri.go:89] found id: ""
	I1212 19:57:41.568474   54219 logs.go:282] 0 containers: []
	W1212 19:57:41.568481   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:41.568485   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:41.568541   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:41.592454   54219 cri.go:89] found id: ""
	I1212 19:57:41.592468   54219 logs.go:282] 0 containers: []
	W1212 19:57:41.592475   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:41.592480   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:41.592537   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:41.616514   54219 cri.go:89] found id: ""
	I1212 19:57:41.616528   54219 logs.go:282] 0 containers: []
	W1212 19:57:41.616535   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:41.616540   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:41.616600   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:41.640661   54219 cri.go:89] found id: ""
	I1212 19:57:41.640675   54219 logs.go:282] 0 containers: []
	W1212 19:57:41.640681   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:41.640686   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:41.640741   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:41.668228   54219 cri.go:89] found id: ""
	I1212 19:57:41.668241   54219 logs.go:282] 0 containers: []
	W1212 19:57:41.668248   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:41.668254   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:41.668315   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:41.694010   54219 cri.go:89] found id: ""
	I1212 19:57:41.694023   54219 logs.go:282] 0 containers: []
	W1212 19:57:41.694030   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:41.694048   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:41.694057   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:41.759133   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:41.759153   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:41.770184   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:41.770200   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:41.834777   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:41.826216   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:41.826727   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:41.828548   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:41.828893   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:41.830348   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:41.826216   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:41.826727   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:41.828548   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:41.828893   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:41.830348   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:41.834788   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:41.834798   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:41.896691   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:41.896709   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
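The cycles above repeat roughly every three seconds: `pgrep -xnf kube-apiserver.*minikube.*` finds no process, so the harness gathers logs and tries again. A rough reconstruction of that poll-until-deadline pattern; the 4-minute budget is an assumption for illustration, and the real harness runs the check over SSH via ssh_runner rather than locally:

    // waitapiserver.go - a sketch of the polling loop visible in this log,
    // assuming it runs inside the node where pgrep can see the process.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(4 * time.Minute) // assumed budget, not minikube's actual value
    	for time.Now().Before(deadline) {
    		// Mirrors: sudo pgrep -xnf kube-apiserver.*minikube.*
    		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    		if err == nil {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		// pgrep exits nonzero when nothing matches; sleep and retry, as the log does.
    		time.Sleep(3 * time.Second)
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }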
	I1212 19:57:44.424748   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:44.434763   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:44.434819   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:44.458808   54219 cri.go:89] found id: ""
	I1212 19:57:44.458821   54219 logs.go:282] 0 containers: []
	W1212 19:57:44.458833   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:44.458839   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:44.458895   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:44.484932   54219 cri.go:89] found id: ""
	I1212 19:57:44.484945   54219 logs.go:282] 0 containers: []
	W1212 19:57:44.484951   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:44.484956   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:44.485013   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:44.509964   54219 cri.go:89] found id: ""
	I1212 19:57:44.509978   54219 logs.go:282] 0 containers: []
	W1212 19:57:44.509985   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:44.509990   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:44.510047   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:44.538212   54219 cri.go:89] found id: ""
	I1212 19:57:44.538226   54219 logs.go:282] 0 containers: []
	W1212 19:57:44.538233   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:44.538239   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:44.538295   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:44.563029   54219 cri.go:89] found id: ""
	I1212 19:57:44.563043   54219 logs.go:282] 0 containers: []
	W1212 19:57:44.563050   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:44.563058   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:44.563116   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:44.594560   54219 cri.go:89] found id: ""
	I1212 19:57:44.594573   54219 logs.go:282] 0 containers: []
	W1212 19:57:44.594580   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:44.594585   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:44.594648   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:44.618882   54219 cri.go:89] found id: ""
	I1212 19:57:44.618896   54219 logs.go:282] 0 containers: []
	W1212 19:57:44.618903   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:44.618910   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:44.618921   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:44.674635   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:44.674653   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:44.685377   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:44.685392   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:44.767577   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:44.758871   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:44.759548   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:44.761205   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:44.761708   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:44.763309   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:44.758871   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:44.759548   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:44.761205   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:44.761708   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:44.763309   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:44.767587   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:44.767599   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:44.830883   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:44.830901   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:47.361584   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:47.371608   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:47.371664   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:47.397902   54219 cri.go:89] found id: ""
	I1212 19:57:47.397915   54219 logs.go:282] 0 containers: []
	W1212 19:57:47.397922   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:47.397927   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:47.397983   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:47.421839   54219 cri.go:89] found id: ""
	I1212 19:57:47.421852   54219 logs.go:282] 0 containers: []
	W1212 19:57:47.421859   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:47.421864   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:47.421920   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:47.444814   54219 cri.go:89] found id: ""
	I1212 19:57:47.444829   54219 logs.go:282] 0 containers: []
	W1212 19:57:47.444836   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:47.444841   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:47.444895   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:47.470743   54219 cri.go:89] found id: ""
	I1212 19:57:47.470758   54219 logs.go:282] 0 containers: []
	W1212 19:57:47.470765   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:47.470770   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:47.470829   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:47.494189   54219 cri.go:89] found id: ""
	I1212 19:57:47.494202   54219 logs.go:282] 0 containers: []
	W1212 19:57:47.494209   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:47.494214   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:47.494271   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:47.522490   54219 cri.go:89] found id: ""
	I1212 19:57:47.522504   54219 logs.go:282] 0 containers: []
	W1212 19:57:47.522510   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:47.522515   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:47.522573   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:47.546914   54219 cri.go:89] found id: ""
	I1212 19:57:47.546929   54219 logs.go:282] 0 containers: []
	W1212 19:57:47.546938   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:47.546948   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:47.546960   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:47.602569   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:47.602586   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:47.613063   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:47.613077   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:47.675404   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:47.667395   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:47.668258   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:47.669918   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:47.670233   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:47.671713   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:47.667395   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:47.668258   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:47.669918   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:47.670233   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:47.671713   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:47.675413   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:47.675424   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:47.744526   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:47.744545   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:50.275957   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:50.285985   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:50.286042   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:50.310834   54219 cri.go:89] found id: ""
	I1212 19:57:50.310848   54219 logs.go:282] 0 containers: []
	W1212 19:57:50.310855   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:50.310860   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:50.310915   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:50.335949   54219 cri.go:89] found id: ""
	I1212 19:57:50.335962   54219 logs.go:282] 0 containers: []
	W1212 19:57:50.335969   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:50.335973   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:50.336042   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:50.361218   54219 cri.go:89] found id: ""
	I1212 19:57:50.361233   54219 logs.go:282] 0 containers: []
	W1212 19:57:50.361239   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:50.361244   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:50.361302   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:50.389990   54219 cri.go:89] found id: ""
	I1212 19:57:50.390004   54219 logs.go:282] 0 containers: []
	W1212 19:57:50.390011   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:50.390016   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:50.390070   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:50.414872   54219 cri.go:89] found id: ""
	I1212 19:57:50.414886   54219 logs.go:282] 0 containers: []
	W1212 19:57:50.414893   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:50.414898   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:50.414957   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:50.439081   54219 cri.go:89] found id: ""
	I1212 19:57:50.439094   54219 logs.go:282] 0 containers: []
	W1212 19:57:50.439102   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:50.439106   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:50.439162   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:50.463124   54219 cri.go:89] found id: ""
	I1212 19:57:50.463137   54219 logs.go:282] 0 containers: []
	W1212 19:57:50.463144   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:50.463151   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:50.463160   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:50.519197   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:50.519217   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:50.529678   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:50.529697   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:50.593926   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:50.585789   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:50.586582   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:50.588344   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:50.588667   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:50.589987   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:50.585789   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:50.586582   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:50.588344   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:50.588667   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:50.589987   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:50.593936   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:50.593946   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:50.663627   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:50.663647   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:53.195155   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:53.205007   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:53.205065   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:53.228910   54219 cri.go:89] found id: ""
	I1212 19:57:53.228924   54219 logs.go:282] 0 containers: []
	W1212 19:57:53.228930   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:53.228935   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:53.228992   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:53.256269   54219 cri.go:89] found id: ""
	I1212 19:57:53.256282   54219 logs.go:282] 0 containers: []
	W1212 19:57:53.256289   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:53.256294   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:53.256363   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:53.279490   54219 cri.go:89] found id: ""
	I1212 19:57:53.279505   54219 logs.go:282] 0 containers: []
	W1212 19:57:53.279512   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:53.279517   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:53.279575   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:53.303201   54219 cri.go:89] found id: ""
	I1212 19:57:53.303215   54219 logs.go:282] 0 containers: []
	W1212 19:57:53.303222   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:53.303227   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:53.303285   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:53.331320   54219 cri.go:89] found id: ""
	I1212 19:57:53.331333   54219 logs.go:282] 0 containers: []
	W1212 19:57:53.331349   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:53.331354   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:53.331424   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:53.355603   54219 cri.go:89] found id: ""
	I1212 19:57:53.355617   54219 logs.go:282] 0 containers: []
	W1212 19:57:53.355624   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:53.355629   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:53.355685   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:53.380364   54219 cri.go:89] found id: ""
	I1212 19:57:53.380378   54219 logs.go:282] 0 containers: []
	W1212 19:57:53.380385   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:53.380394   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:53.380405   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:53.448989   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:53.440655   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:53.441253   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:53.442753   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:53.443064   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:53.444518   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:53.440655   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:53.441253   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:53.442753   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:53.443064   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:53.444518   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:53.449000   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:53.449010   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:53.516879   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:53.516908   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:53.550642   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:53.550661   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:53.608676   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:53.608694   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
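The same three log sources are collected on every pass: kubelet and containerd via journalctl, and kernel warnings via a filtered dmesg. A small stand-alone collector using the exact flags from the log, assuming a node with systemd journald available:

    // gatherlogs.go - a minimal sketch reproducing the log-gathering commands
    // above; flag values are copied verbatim from the commands in this log.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func run(label string, args ...string) {
    	fmt.Fprintf(os.Stdout, "==> %s\n", label)
    	cmd := exec.Command(args[0], args[1:]...)
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Fprintf(os.Stderr, "%s failed: %v\n", label, err)
    	}
    }

    func main() {
    	run("kubelet", "sudo", "journalctl", "-u", "kubelet", "-n", "400")
    	run("containerd", "sudo", "journalctl", "-u", "containerd", "-n", "400")
    	run("dmesg", "bash", "-c",
    		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    }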
	I1212 19:57:56.120012   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:56.129790   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:56.129852   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:56.154949   54219 cri.go:89] found id: ""
	I1212 19:57:56.154963   54219 logs.go:282] 0 containers: []
	W1212 19:57:56.154969   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:56.154974   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:56.155029   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:56.178218   54219 cri.go:89] found id: ""
	I1212 19:57:56.178232   54219 logs.go:282] 0 containers: []
	W1212 19:57:56.178240   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:56.178254   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:56.178311   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:56.202037   54219 cri.go:89] found id: ""
	I1212 19:57:56.202053   54219 logs.go:282] 0 containers: []
	W1212 19:57:56.202060   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:56.202065   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:56.202127   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:56.226077   54219 cri.go:89] found id: ""
	I1212 19:57:56.226106   54219 logs.go:282] 0 containers: []
	W1212 19:57:56.226114   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:56.226120   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:56.226183   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:56.249790   54219 cri.go:89] found id: ""
	I1212 19:57:56.249803   54219 logs.go:282] 0 containers: []
	W1212 19:57:56.249810   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:56.249815   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:56.249868   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:56.273767   54219 cri.go:89] found id: ""
	I1212 19:57:56.273780   54219 logs.go:282] 0 containers: []
	W1212 19:57:56.273787   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:56.273793   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:56.273851   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:56.301574   54219 cri.go:89] found id: ""
	I1212 19:57:56.301587   54219 logs.go:282] 0 containers: []
	W1212 19:57:56.301594   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:56.301602   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:56.301612   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:56.362705   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:56.362723   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:56.373142   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:56.373166   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:56.434197   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:56.426404   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:56.426921   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:56.428541   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:56.429015   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:56.430546   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
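Each "listing CRI containers ..." / found id: "" pair in this loop is minikube asking containerd, through crictl, whether a given control-plane container exists yet; all seven lookups (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) come back empty. The equivalent manual query, using the same flags the log shows:

	sudo crictl ps -a --quiet --name=kube-apiserver   # prints matching container IDs, one per line; empty here
	sudo crictl ps -a --quiet --name=etcd             # same check for each remaining component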
	I1212 19:57:56.434207   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:56.434217   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:56.497280   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:56.497298   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:59.029935   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:59.040115   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:59.040173   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:59.064443   54219 cri.go:89] found id: ""
	I1212 19:57:59.064458   54219 logs.go:282] 0 containers: []
	W1212 19:57:59.064465   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:59.064470   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:59.064525   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:59.089160   54219 cri.go:89] found id: ""
	I1212 19:57:59.089173   54219 logs.go:282] 0 containers: []
	W1212 19:57:59.089180   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:59.089185   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:59.089250   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:59.113771   54219 cri.go:89] found id: ""
	I1212 19:57:59.113785   54219 logs.go:282] 0 containers: []
	W1212 19:57:59.113792   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:59.113797   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:59.113852   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:59.141148   54219 cri.go:89] found id: ""
	I1212 19:57:59.141162   54219 logs.go:282] 0 containers: []
	W1212 19:57:59.141169   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:59.141174   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:59.141241   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:59.163991   54219 cri.go:89] found id: ""
	I1212 19:57:59.164005   54219 logs.go:282] 0 containers: []
	W1212 19:57:59.164011   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:59.164016   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:59.164076   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:59.189011   54219 cri.go:89] found id: ""
	I1212 19:57:59.189026   54219 logs.go:282] 0 containers: []
	W1212 19:57:59.189033   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:59.189038   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:59.189092   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:59.213106   54219 cri.go:89] found id: ""
	I1212 19:57:59.213119   54219 logs.go:282] 0 containers: []
	W1212 19:57:59.213125   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:59.213133   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:59.213143   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:59.268036   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:59.268054   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:59.278468   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:59.278483   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:59.343881   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:59.335767   13724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:59.336563   13724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:59.338140   13724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:59.338447   13724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:59.339954   13724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
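The "Gathering logs for ..." steps are thin wrappers around journalctl and dmesg. To collect the same data by hand when reproducing the failure (commands copied from the log; -n 400 mirrors the tail length minikube uses):

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400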
	I1212 19:57:59.343891   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:59.343909   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:59.406439   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:59.406457   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:01.935967   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:01.947272   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:01.947331   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:01.980222   54219 cri.go:89] found id: ""
	I1212 19:58:01.980235   54219 logs.go:282] 0 containers: []
	W1212 19:58:01.980251   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:01.980257   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:01.980314   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:02.009777   54219 cri.go:89] found id: ""
	I1212 19:58:02.009794   54219 logs.go:282] 0 containers: []
	W1212 19:58:02.009802   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:02.009808   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:02.009899   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:02.042576   54219 cri.go:89] found id: ""
	I1212 19:58:02.042591   54219 logs.go:282] 0 containers: []
	W1212 19:58:02.042598   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:02.042603   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:02.042680   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:02.067370   54219 cri.go:89] found id: ""
	I1212 19:58:02.067384   54219 logs.go:282] 0 containers: []
	W1212 19:58:02.067392   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:02.067397   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:02.067462   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:02.096410   54219 cri.go:89] found id: ""
	I1212 19:58:02.096423   54219 logs.go:282] 0 containers: []
	W1212 19:58:02.096430   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:02.096436   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:02.096495   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:02.120186   54219 cri.go:89] found id: ""
	I1212 19:58:02.120200   54219 logs.go:282] 0 containers: []
	W1212 19:58:02.120207   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:02.120212   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:02.120272   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:02.146219   54219 cri.go:89] found id: ""
	I1212 19:58:02.146233   54219 logs.go:282] 0 containers: []
	W1212 19:58:02.146240   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:02.146264   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:02.146274   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:02.203137   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:02.203156   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:02.214269   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:02.214290   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:02.282468   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:02.273826   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:02.274485   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:02.276251   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:02.276887   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:02.278544   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:02.282477   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:02.282490   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:02.345078   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:02.345096   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
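The "container status" command above is deliberately defensive: "which crictl || echo crictl" resolves an absolute path when crictl is on the caller's PATH (so sudo does not depend on its own PATH) and falls back to the bare name otherwise, while the trailing "|| sudo docker ps -a" switches to the Docker CLI on runtimes without crictl. Spelled out as plain shell (a sketch of the fallback logic only):

	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a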
	I1212 19:58:04.874398   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:04.884418   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:04.884477   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:04.913524   54219 cri.go:89] found id: ""
	I1212 19:58:04.913537   54219 logs.go:282] 0 containers: []
	W1212 19:58:04.913544   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:04.913596   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:04.913656   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:04.941905   54219 cri.go:89] found id: ""
	I1212 19:58:04.941919   54219 logs.go:282] 0 containers: []
	W1212 19:58:04.941925   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:04.941930   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:04.941988   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:04.969529   54219 cri.go:89] found id: ""
	I1212 19:58:04.969549   54219 logs.go:282] 0 containers: []
	W1212 19:58:04.969556   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:04.969561   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:04.969619   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:04.998159   54219 cri.go:89] found id: ""
	I1212 19:58:04.998173   54219 logs.go:282] 0 containers: []
	W1212 19:58:04.998180   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:04.998185   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:04.998241   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:05.027027   54219 cri.go:89] found id: ""
	I1212 19:58:05.027042   54219 logs.go:282] 0 containers: []
	W1212 19:58:05.027052   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:05.027057   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:05.027159   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:05.053821   54219 cri.go:89] found id: ""
	I1212 19:58:05.053834   54219 logs.go:282] 0 containers: []
	W1212 19:58:05.053841   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:05.053847   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:05.053903   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:05.078817   54219 cri.go:89] found id: ""
	I1212 19:58:05.078831   54219 logs.go:282] 0 containers: []
	W1212 19:58:05.078837   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:05.078845   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:05.078856   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:05.137908   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:05.137927   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:05.149843   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:05.149859   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:05.216435   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:05.208482   13932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:05.208883   13932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:05.210371   13932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:05.210673   13932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:05.212119   13932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:05.216444   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:05.216454   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:05.281451   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:05.281469   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:07.809177   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:07.819079   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:07.819135   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:07.843677   54219 cri.go:89] found id: ""
	I1212 19:58:07.843691   54219 logs.go:282] 0 containers: []
	W1212 19:58:07.843698   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:07.843703   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:07.843763   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:07.873172   54219 cri.go:89] found id: ""
	I1212 19:58:07.873185   54219 logs.go:282] 0 containers: []
	W1212 19:58:07.873192   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:07.873197   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:07.873251   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:07.898060   54219 cri.go:89] found id: ""
	I1212 19:58:07.898082   54219 logs.go:282] 0 containers: []
	W1212 19:58:07.898090   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:07.898099   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:07.898157   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:07.922099   54219 cri.go:89] found id: ""
	I1212 19:58:07.922113   54219 logs.go:282] 0 containers: []
	W1212 19:58:07.922120   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:07.922131   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:07.922186   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:07.951267   54219 cri.go:89] found id: ""
	I1212 19:58:07.951281   54219 logs.go:282] 0 containers: []
	W1212 19:58:07.951287   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:07.951292   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:07.951350   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:07.979301   54219 cri.go:89] found id: ""
	I1212 19:58:07.979315   54219 logs.go:282] 0 containers: []
	W1212 19:58:07.979322   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:07.979327   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:07.979383   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:08.016405   54219 cri.go:89] found id: ""
	I1212 19:58:08.016418   54219 logs.go:282] 0 containers: []
	W1212 19:58:08.016425   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:08.016433   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:08.016444   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:08.027858   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:08.027875   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:08.095861   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:08.086729   14034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:08.087573   14034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:08.088733   14034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:08.089465   14034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:08.091109   14034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:08.095872   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:08.095885   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:08.159001   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:08.159019   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:08.186794   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:08.186812   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:10.744419   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
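The timestamps show a fresh poll roughly every three seconds: between retries minikube checks for a live apiserver process with pgrep, where -f matches against the full command line, -x requires the pattern to match that command line exactly, and -n keeps only the newest match, so the pattern only hits a kube-apiserver launched for this minikube node. The standalone equivalent (a sketch; run inside the node):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo "apiserver process found" || echo "no apiserver process"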
	I1212 19:58:10.755144   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:10.755202   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:10.778581   54219 cri.go:89] found id: ""
	I1212 19:58:10.778594   54219 logs.go:282] 0 containers: []
	W1212 19:58:10.778601   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:10.778607   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:10.778663   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:10.802768   54219 cri.go:89] found id: ""
	I1212 19:58:10.802781   54219 logs.go:282] 0 containers: []
	W1212 19:58:10.802787   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:10.802792   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:10.802850   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:10.828295   54219 cri.go:89] found id: ""
	I1212 19:58:10.828309   54219 logs.go:282] 0 containers: []
	W1212 19:58:10.828316   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:10.828321   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:10.828374   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:10.851350   54219 cri.go:89] found id: ""
	I1212 19:58:10.851363   54219 logs.go:282] 0 containers: []
	W1212 19:58:10.851370   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:10.851375   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:10.851429   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:10.879621   54219 cri.go:89] found id: ""
	I1212 19:58:10.879635   54219 logs.go:282] 0 containers: []
	W1212 19:58:10.879641   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:10.879646   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:10.879700   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:10.905108   54219 cri.go:89] found id: ""
	I1212 19:58:10.905122   54219 logs.go:282] 0 containers: []
	W1212 19:58:10.905129   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:10.905134   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:10.905191   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:10.928365   54219 cri.go:89] found id: ""
	I1212 19:58:10.928379   54219 logs.go:282] 0 containers: []
	W1212 19:58:10.928386   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:10.928394   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:10.928418   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:10.986372   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:10.986390   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:10.997450   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:10.997464   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:11.067488   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:11.059465   14141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:11.060118   14141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:11.061655   14141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:11.062199   14141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:11.063664   14141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:11.067499   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:11.067510   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:11.131069   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:11.131089   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:13.660595   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:13.670703   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:13.670762   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:13.694211   54219 cri.go:89] found id: ""
	I1212 19:58:13.694224   54219 logs.go:282] 0 containers: []
	W1212 19:58:13.694231   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:13.694236   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:13.694291   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:13.724541   54219 cri.go:89] found id: ""
	I1212 19:58:13.724554   54219 logs.go:282] 0 containers: []
	W1212 19:58:13.724561   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:13.724566   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:13.724625   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:13.750194   54219 cri.go:89] found id: ""
	I1212 19:58:13.750207   54219 logs.go:282] 0 containers: []
	W1212 19:58:13.750214   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:13.750219   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:13.750277   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:13.774257   54219 cri.go:89] found id: ""
	I1212 19:58:13.774271   54219 logs.go:282] 0 containers: []
	W1212 19:58:13.774278   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:13.774283   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:13.774338   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:13.799078   54219 cri.go:89] found id: ""
	I1212 19:58:13.799091   54219 logs.go:282] 0 containers: []
	W1212 19:58:13.799097   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:13.799102   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:13.799158   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:13.822710   54219 cri.go:89] found id: ""
	I1212 19:58:13.822724   54219 logs.go:282] 0 containers: []
	W1212 19:58:13.822730   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:13.822735   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:13.822791   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:13.849556   54219 cri.go:89] found id: ""
	I1212 19:58:13.849570   54219 logs.go:282] 0 containers: []
	W1212 19:58:13.849576   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:13.849584   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:13.849595   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:13.907383   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:13.907403   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:13.917866   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:13.917883   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:14.000449   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:13.992686   14245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:13.993186   14245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:13.994632   14245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:13.995157   14245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:13.996620   14245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:14.000458   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:14.000477   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:14.066367   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:14.066386   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:16.594682   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:16.604845   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:16.604903   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:16.629471   54219 cri.go:89] found id: ""
	I1212 19:58:16.629485   54219 logs.go:282] 0 containers: []
	W1212 19:58:16.629493   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:16.629498   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:16.629554   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:16.654890   54219 cri.go:89] found id: ""
	I1212 19:58:16.654904   54219 logs.go:282] 0 containers: []
	W1212 19:58:16.654911   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:16.654916   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:16.654981   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:16.679283   54219 cri.go:89] found id: ""
	I1212 19:58:16.679297   54219 logs.go:282] 0 containers: []
	W1212 19:58:16.679304   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:16.679309   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:16.679362   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:16.704043   54219 cri.go:89] found id: ""
	I1212 19:58:16.704057   54219 logs.go:282] 0 containers: []
	W1212 19:58:16.704065   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:16.704070   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:16.704127   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:16.728139   54219 cri.go:89] found id: ""
	I1212 19:58:16.728153   54219 logs.go:282] 0 containers: []
	W1212 19:58:16.728159   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:16.728164   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:16.728225   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:16.757814   54219 cri.go:89] found id: ""
	I1212 19:58:16.757829   54219 logs.go:282] 0 containers: []
	W1212 19:58:16.757836   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:16.757841   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:16.757894   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:16.782420   54219 cri.go:89] found id: ""
	I1212 19:58:16.782433   54219 logs.go:282] 0 containers: []
	W1212 19:58:16.782441   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:16.782448   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:16.782458   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:16.841763   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:16.841780   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:16.852845   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:16.852861   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:16.920551   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:16.912049   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:16.912668   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:16.914340   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:16.914862   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:16.916428   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
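The "describe nodes" probe uses the kubectl binary minikube staged on the node rather than the host's kubectl, pointed at the node-local kubeconfig, so it fails exactly the way any client would while the apiserver is down. To rerun it by hand (paths copied verbatim from the log):

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig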
	I1212 19:58:16.920561   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:16.920572   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:16.986769   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:16.986788   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:19.527987   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:19.537931   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:19.537994   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:19.561363   54219 cri.go:89] found id: ""
	I1212 19:58:19.561377   54219 logs.go:282] 0 containers: []
	W1212 19:58:19.561383   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:19.561389   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:19.561444   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:19.584696   54219 cri.go:89] found id: ""
	I1212 19:58:19.584710   54219 logs.go:282] 0 containers: []
	W1212 19:58:19.584717   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:19.584722   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:19.584783   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:19.608796   54219 cri.go:89] found id: ""
	I1212 19:58:19.608816   54219 logs.go:282] 0 containers: []
	W1212 19:58:19.608829   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:19.608834   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:19.608888   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:19.633676   54219 cri.go:89] found id: ""
	I1212 19:58:19.633690   54219 logs.go:282] 0 containers: []
	W1212 19:58:19.633697   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:19.633702   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:19.633765   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:19.656537   54219 cri.go:89] found id: ""
	I1212 19:58:19.656550   54219 logs.go:282] 0 containers: []
	W1212 19:58:19.656557   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:19.656562   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:19.656615   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:19.681676   54219 cri.go:89] found id: ""
	I1212 19:58:19.681689   54219 logs.go:282] 0 containers: []
	W1212 19:58:19.681696   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:19.681701   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:19.681756   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:19.704747   54219 cri.go:89] found id: ""
	I1212 19:58:19.704761   54219 logs.go:282] 0 containers: []
	W1212 19:58:19.704768   54219 logs.go:284] No container was found matching "kindnet"
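The seven crictl queries above are minikube's per-component container scan: one `crictl ps -a --quiet --name=<component>` per control-plane piece, where an empty ID list means the container never started. A self-contained sketch of the same scan (the real code in cri.go runs these commands over SSH via ssh_runner):

	// Check each control-plane component for any container, running
	// or exited, using the same crictl invocation seen in the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, c := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a",
				"--quiet", "--name="+c).Output()
			if err != nil {
				fmt.Printf("crictl failed for %q: %v\n", c, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", c)
				continue
			}
			fmt.Printf("%s: %v\n", c, ids)
		}
	}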
	I1212 19:58:19.704775   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:19.704785   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:19.760344   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:19.760360   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:19.770729   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:19.770745   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:19.834442   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:19.826076   14456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:19.826835   14456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:19.828378   14456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:19.828837   14456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:19.830354   14456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:58:19.826076   14456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:19.826835   14456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:19.828378   14456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:19.828837   14456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:19.830354   14456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:58:19.834452   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:19.834462   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:19.897417   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:19.897437   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
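Each "Gathering logs for ..." step is just a shell pipeline executed on the node; the commands below are taken verbatim from the log lines above. A standalone sketch that runs the same pipelines locally (the gather helper is hypothetical, not minikube's):

	// Collect the same diagnostic sources the log gathers: kubelet and
	// containerd journals, filtered dmesg, and the container list.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func gather(name, cmd string) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("==> %s (err: %v)\n%s\n", name, err, out)
	}

	func main() {
		gather("kubelet", "sudo journalctl -u kubelet -n 400")
		gather("containerd", "sudo journalctl -u containerd -n 400")
		gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	}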
	[The same polling cycle repeats at 19:58:22, 19:58:25, 19:58:28, 19:58:31, 19:58:34, 19:58:37, and 19:58:40, each time with identical results: pgrep finds no kube-apiserver process, crictl finds no control-plane containers, and every "describe nodes" attempt is refused on localhost:8441. Only the timestamps and the kubectl PIDs (14560, 14664, 14770, 14874, 14982, 15084, 15182) change.]
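The repetition summarized above is a fixed-interval health poll: rerun the pgrep check roughly every three seconds, as the timestamps show, until the apiserver process appears or an overall deadline expires. A minimal sketch of the pattern; checkAPIServer and the six-minute deadline are assumptions for illustration, not minikube's actual names or timeout:

	// Poll for the kube-apiserver process until it appears or we time out.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func checkAPIServer() bool {
		// Mirrors the log's: sudo pgrep -xnf kube-apiserver.*minikube.*
		// pgrep exits 0 only when at least one process matches.
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		deadline := time.Now().Add(6 * time.Minute) // assumed overall timeout
		for time.Now().Before(deadline) {
			if checkAPIServer() {
				fmt.Println("kube-apiserver process found")
				return
			}
			time.Sleep(3 * time.Second) // cadence visible in the log timestamps
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}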
	I1212 19:58:42.971593   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:42.981866   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:42.981931   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:43.006660   54219 cri.go:89] found id: ""
	I1212 19:58:43.006674   54219 logs.go:282] 0 containers: []
	W1212 19:58:43.006690   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:43.006696   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:43.006753   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:43.033557   54219 cri.go:89] found id: ""
	I1212 19:58:43.033571   54219 logs.go:282] 0 containers: []
	W1212 19:58:43.033578   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:43.033583   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:43.033643   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:43.062054   54219 cri.go:89] found id: ""
	I1212 19:58:43.062067   54219 logs.go:282] 0 containers: []
	W1212 19:58:43.062073   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:43.062078   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:43.062139   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:43.086826   54219 cri.go:89] found id: ""
	I1212 19:58:43.086841   54219 logs.go:282] 0 containers: []
	W1212 19:58:43.086849   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:43.086854   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:43.086920   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:43.112001   54219 cri.go:89] found id: ""
	I1212 19:58:43.112015   54219 logs.go:282] 0 containers: []
	W1212 19:58:43.112022   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:43.112027   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:43.112099   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:43.137727   54219 cri.go:89] found id: ""
	I1212 19:58:43.137741   54219 logs.go:282] 0 containers: []
	W1212 19:58:43.137748   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:43.137753   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:43.137811   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:43.163693   54219 cri.go:89] found id: ""
	I1212 19:58:43.163707   54219 logs.go:282] 0 containers: []
	W1212 19:58:43.163714   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:43.163731   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:43.163742   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:43.174602   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:43.174617   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:43.254196   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:43.243179   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:43.243697   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:43.245358   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:43.245738   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:43.247174   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:58:43.243179   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:43.243697   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:43.245358   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:43.245738   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:43.247174   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:58:43.254213   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:43.254224   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:43.321187   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:43.321206   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:43.353090   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:43.353105   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:06.393541   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:06.403453   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:06.403511   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:06.427440   54219 cri.go:89] found id: ""
	I1212 19:59:06.427454   54219 logs.go:282] 0 containers: []
	W1212 19:59:06.427460   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:06.427465   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:06.427524   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:06.457341   54219 cri.go:89] found id: ""
	I1212 19:59:06.457355   54219 logs.go:282] 0 containers: []
	W1212 19:59:06.457361   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:06.457366   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:06.457424   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:06.495095   54219 cri.go:89] found id: ""
	I1212 19:59:06.495110   54219 logs.go:282] 0 containers: []
	W1212 19:59:06.495116   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:06.495122   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:06.495179   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:06.522006   54219 cri.go:89] found id: ""
	I1212 19:59:06.522041   54219 logs.go:282] 0 containers: []
	W1212 19:59:06.522048   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:06.522053   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:06.522111   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:06.551005   54219 cri.go:89] found id: ""
	I1212 19:59:06.551019   54219 logs.go:282] 0 containers: []
	W1212 19:59:06.551026   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:06.551031   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:06.551099   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:06.576063   54219 cri.go:89] found id: ""
	I1212 19:59:06.576089   54219 logs.go:282] 0 containers: []
	W1212 19:59:06.576096   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:06.576101   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:06.576157   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:06.601543   54219 cri.go:89] found id: ""
	I1212 19:59:06.601557   54219 logs.go:282] 0 containers: []
	W1212 19:59:06.601565   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:06.601572   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:06.601582   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:06.657957   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:06.657977   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:06.668650   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:06.668665   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:06.730730   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:06.722725   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:06.723501   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:06.725053   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:06.725374   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:06.726867   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:06.722725   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:06.723501   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:06.725053   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:06.725374   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:06.726867   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:06.730739   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:06.730749   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:06.793201   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:06.793219   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:09.321790   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:09.332762   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:09.332820   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:09.359927   54219 cri.go:89] found id: ""
	I1212 19:59:09.359941   54219 logs.go:282] 0 containers: []
	W1212 19:59:09.359948   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:09.359953   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:09.360026   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:09.385111   54219 cri.go:89] found id: ""
	I1212 19:59:09.385125   54219 logs.go:282] 0 containers: []
	W1212 19:59:09.385137   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:09.385142   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:09.385201   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:09.416991   54219 cri.go:89] found id: ""
	I1212 19:59:09.417006   54219 logs.go:282] 0 containers: []
	W1212 19:59:09.417013   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:09.417018   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:09.417077   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:09.442593   54219 cri.go:89] found id: ""
	I1212 19:59:09.442606   54219 logs.go:282] 0 containers: []
	W1212 19:59:09.442612   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:09.442617   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:09.442672   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:09.469724   54219 cri.go:89] found id: ""
	I1212 19:59:09.469738   54219 logs.go:282] 0 containers: []
	W1212 19:59:09.469745   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:09.469750   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:09.469806   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:09.506134   54219 cri.go:89] found id: ""
	I1212 19:59:09.506148   54219 logs.go:282] 0 containers: []
	W1212 19:59:09.506154   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:09.506160   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:09.506226   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:09.537548   54219 cri.go:89] found id: ""
	I1212 19:59:09.537561   54219 logs.go:282] 0 containers: []
	W1212 19:59:09.537568   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:09.537576   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:09.537585   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:09.596110   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:09.596128   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:09.607356   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:09.607373   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:09.678885   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:09.670805   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:09.671533   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:09.673167   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:09.673470   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:09.674917   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:09.670805   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:09.671533   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:09.673167   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:09.673470   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:09.674917   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:09.678895   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:09.678906   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:09.744120   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:09.744138   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:12.273229   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:12.283400   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:12.283456   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:12.307126   54219 cri.go:89] found id: ""
	I1212 19:59:12.307140   54219 logs.go:282] 0 containers: []
	W1212 19:59:12.307147   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:12.307152   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:12.307208   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:12.333237   54219 cri.go:89] found id: ""
	I1212 19:59:12.333250   54219 logs.go:282] 0 containers: []
	W1212 19:59:12.333257   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:12.333261   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:12.333318   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:12.357336   54219 cri.go:89] found id: ""
	I1212 19:59:12.357349   54219 logs.go:282] 0 containers: []
	W1212 19:59:12.357356   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:12.357361   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:12.357416   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:12.382066   54219 cri.go:89] found id: ""
	I1212 19:59:12.382080   54219 logs.go:282] 0 containers: []
	W1212 19:59:12.382086   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:12.382091   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:12.382147   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:12.406069   54219 cri.go:89] found id: ""
	I1212 19:59:12.406082   54219 logs.go:282] 0 containers: []
	W1212 19:59:12.406089   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:12.406094   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:12.406149   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:12.434345   54219 cri.go:89] found id: ""
	I1212 19:59:12.434365   54219 logs.go:282] 0 containers: []
	W1212 19:59:12.434372   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:12.434377   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:12.434457   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:12.466422   54219 cri.go:89] found id: ""
	I1212 19:59:12.466436   54219 logs.go:282] 0 containers: []
	W1212 19:59:12.466444   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:12.466451   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:12.466462   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:12.528768   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:12.528787   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:12.541490   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:12.541508   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:12.602589   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:12.594584   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:12.594975   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:12.596484   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:12.596787   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:12.598425   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:12.594584   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:12.594975   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:12.596484   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:12.596787   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:12.598425   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:12.602599   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:12.602609   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:12.664894   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:12.664913   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:15.192235   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:15.202664   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:15.202722   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:15.227464   54219 cri.go:89] found id: ""
	I1212 19:59:15.227477   54219 logs.go:282] 0 containers: []
	W1212 19:59:15.227484   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:15.227489   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:15.227545   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:15.251075   54219 cri.go:89] found id: ""
	I1212 19:59:15.251089   54219 logs.go:282] 0 containers: []
	W1212 19:59:15.251096   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:15.251101   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:15.251156   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:15.275993   54219 cri.go:89] found id: ""
	I1212 19:59:15.276006   54219 logs.go:282] 0 containers: []
	W1212 19:59:15.276013   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:15.276018   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:15.276075   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:15.299883   54219 cri.go:89] found id: ""
	I1212 19:59:15.299896   54219 logs.go:282] 0 containers: []
	W1212 19:59:15.299903   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:15.299908   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:15.299961   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:15.324623   54219 cri.go:89] found id: ""
	I1212 19:59:15.324636   54219 logs.go:282] 0 containers: []
	W1212 19:59:15.324642   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:15.324647   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:15.324702   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:15.350461   54219 cri.go:89] found id: ""
	I1212 19:59:15.350474   54219 logs.go:282] 0 containers: []
	W1212 19:59:15.350481   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:15.350486   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:15.350541   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:15.375380   54219 cri.go:89] found id: ""
	I1212 19:59:15.375407   54219 logs.go:282] 0 containers: []
	W1212 19:59:15.375415   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:15.375423   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:15.375434   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:15.431649   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:15.431669   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:15.444811   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:15.444836   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:15.537885   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:15.529076   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:15.529839   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:15.530552   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:15.532384   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:15.532848   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:15.529076   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:15.529839   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:15.530552   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:15.532384   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:15.532848   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:15.537895   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:15.537908   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:15.604300   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:15.604319   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:18.136615   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:18.146971   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:18.147036   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:18.176330   54219 cri.go:89] found id: ""
	I1212 19:59:18.176344   54219 logs.go:282] 0 containers: []
	W1212 19:59:18.176351   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:18.176359   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:18.176416   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:18.200844   54219 cri.go:89] found id: ""
	I1212 19:59:18.200857   54219 logs.go:282] 0 containers: []
	W1212 19:59:18.200863   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:18.200868   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:18.200924   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:18.224026   54219 cri.go:89] found id: ""
	I1212 19:59:18.224040   54219 logs.go:282] 0 containers: []
	W1212 19:59:18.224046   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:18.224051   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:18.224107   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:18.252073   54219 cri.go:89] found id: ""
	I1212 19:59:18.252086   54219 logs.go:282] 0 containers: []
	W1212 19:59:18.252093   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:18.252098   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:18.252153   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:18.277440   54219 cri.go:89] found id: ""
	I1212 19:59:18.277454   54219 logs.go:282] 0 containers: []
	W1212 19:59:18.277460   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:18.277465   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:18.277521   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:18.302183   54219 cri.go:89] found id: ""
	I1212 19:59:18.302197   54219 logs.go:282] 0 containers: []
	W1212 19:59:18.302214   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:18.302220   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:18.302286   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:18.326037   54219 cri.go:89] found id: ""
	I1212 19:59:18.326058   54219 logs.go:282] 0 containers: []
	W1212 19:59:18.326065   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:18.326073   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:18.326083   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:18.380825   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:18.380843   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:18.391618   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:18.391634   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:18.463287   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:18.454358   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:18.455450   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:18.457129   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:18.457425   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:18.459011   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:18.454358   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:18.455450   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:18.457129   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:18.457425   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:18.459011   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:18.463297   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:18.463309   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:18.536948   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:18.536967   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:21.064758   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:21.074846   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:21.074903   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:21.099031   54219 cri.go:89] found id: ""
	I1212 19:59:21.099044   54219 logs.go:282] 0 containers: []
	W1212 19:59:21.099051   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:21.099056   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:21.099109   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:21.123108   54219 cri.go:89] found id: ""
	I1212 19:59:21.123121   54219 logs.go:282] 0 containers: []
	W1212 19:59:21.123127   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:21.123132   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:21.123187   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:21.146869   54219 cri.go:89] found id: ""
	I1212 19:59:21.146883   54219 logs.go:282] 0 containers: []
	W1212 19:59:21.146890   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:21.146895   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:21.146964   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:21.171309   54219 cri.go:89] found id: ""
	I1212 19:59:21.171323   54219 logs.go:282] 0 containers: []
	W1212 19:59:21.171329   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:21.171340   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:21.171395   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:21.195200   54219 cri.go:89] found id: ""
	I1212 19:59:21.195213   54219 logs.go:282] 0 containers: []
	W1212 19:59:21.195219   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:21.195224   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:21.195282   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:21.218648   54219 cri.go:89] found id: ""
	I1212 19:59:21.218661   54219 logs.go:282] 0 containers: []
	W1212 19:59:21.218668   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:21.218673   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:21.218726   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:21.243375   54219 cri.go:89] found id: ""
	I1212 19:59:21.243388   54219 logs.go:282] 0 containers: []
	W1212 19:59:21.243395   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:21.243402   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:21.243411   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:21.299185   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:21.299202   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:21.309826   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:21.309840   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:21.373437   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:21.365006   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.365633   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.367303   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.367959   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.369725   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:21.365006   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.365633   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.367303   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.367959   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.369725   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:21.373447   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:21.373457   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:21.435817   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:21.435878   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:23.968994   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:23.978907   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:23.978964   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:24.004005   54219 cri.go:89] found id: ""
	I1212 19:59:24.004018   54219 logs.go:282] 0 containers: []
	W1212 19:59:24.004025   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:24.004030   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:24.004085   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:24.031561   54219 cri.go:89] found id: ""
	I1212 19:59:24.031576   54219 logs.go:282] 0 containers: []
	W1212 19:59:24.031583   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:24.031588   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:24.031648   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:24.058089   54219 cri.go:89] found id: ""
	I1212 19:59:24.058105   54219 logs.go:282] 0 containers: []
	W1212 19:59:24.058113   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:24.058120   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:24.058183   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:24.083693   54219 cri.go:89] found id: ""
	I1212 19:59:24.083707   54219 logs.go:282] 0 containers: []
	W1212 19:59:24.083713   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:24.083718   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:24.083774   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:24.110732   54219 cri.go:89] found id: ""
	I1212 19:59:24.110746   54219 logs.go:282] 0 containers: []
	W1212 19:59:24.110753   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:24.110758   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:24.110814   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:24.135252   54219 cri.go:89] found id: ""
	I1212 19:59:24.135266   54219 logs.go:282] 0 containers: []
	W1212 19:59:24.135273   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:24.135278   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:24.135330   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:24.158751   54219 cri.go:89] found id: ""
	I1212 19:59:24.158765   54219 logs.go:282] 0 containers: []
	W1212 19:59:24.158771   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:24.158779   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:24.158788   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:24.188496   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:24.188513   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:24.244683   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:24.244701   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:24.255424   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:24.255440   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:24.324102   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:24.316334   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:24.316892   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:24.318479   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:24.319116   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:24.320192   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:24.316334   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:24.316892   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:24.318479   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:24.319116   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:24.320192   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:24.324113   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:24.324126   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:26.896008   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:26.906451   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:26.906508   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:26.930525   54219 cri.go:89] found id: ""
	I1212 19:59:26.930538   54219 logs.go:282] 0 containers: []
	W1212 19:59:26.930546   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:26.930551   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:26.930607   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:26.954197   54219 cri.go:89] found id: ""
	I1212 19:59:26.954212   54219 logs.go:282] 0 containers: []
	W1212 19:59:26.954219   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:26.954224   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:26.954284   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:26.978362   54219 cri.go:89] found id: ""
	I1212 19:59:26.978375   54219 logs.go:282] 0 containers: []
	W1212 19:59:26.978381   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:26.978388   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:26.978444   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:27.003156   54219 cri.go:89] found id: ""
	I1212 19:59:27.003170   54219 logs.go:282] 0 containers: []
	W1212 19:59:27.003177   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:27.003182   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:27.003241   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:27.035090   54219 cri.go:89] found id: ""
	I1212 19:59:27.035103   54219 logs.go:282] 0 containers: []
	W1212 19:59:27.035110   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:27.035115   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:27.035170   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:27.059270   54219 cri.go:89] found id: ""
	I1212 19:59:27.059284   54219 logs.go:282] 0 containers: []
	W1212 19:59:27.059291   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:27.059296   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:27.059351   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:27.083068   54219 cri.go:89] found id: ""
	I1212 19:59:27.083081   54219 logs.go:282] 0 containers: []
	W1212 19:59:27.083088   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:27.083096   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:27.083105   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:27.138962   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:27.138979   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:27.149646   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:27.149662   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:27.216025   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:27.207685   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:27.208329   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:27.210138   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:27.210711   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:27.212312   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:27.207685   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:27.208329   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:27.210138   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:27.210711   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:27.212312   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:27.216036   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:27.216046   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:27.277808   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:27.277826   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:29.806087   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:29.816453   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:29.816508   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:29.839921   54219 cri.go:89] found id: ""
	I1212 19:59:29.839935   54219 logs.go:282] 0 containers: []
	W1212 19:59:29.839943   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:29.839950   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:29.840023   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:29.868215   54219 cri.go:89] found id: ""
	I1212 19:59:29.868229   54219 logs.go:282] 0 containers: []
	W1212 19:59:29.868236   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:29.868241   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:29.868298   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:29.892199   54219 cri.go:89] found id: ""
	I1212 19:59:29.892212   54219 logs.go:282] 0 containers: []
	W1212 19:59:29.892219   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:29.892226   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:29.892281   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:29.921316   54219 cri.go:89] found id: ""
	I1212 19:59:29.921330   54219 logs.go:282] 0 containers: []
	W1212 19:59:29.921336   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:29.921351   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:29.921415   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:29.946039   54219 cri.go:89] found id: ""
	I1212 19:59:29.946053   54219 logs.go:282] 0 containers: []
	W1212 19:59:29.946059   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:29.946064   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:29.946125   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:29.976514   54219 cri.go:89] found id: ""
	I1212 19:59:29.976528   54219 logs.go:282] 0 containers: []
	W1212 19:59:29.976536   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:29.976541   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:29.976601   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:30.000755   54219 cri.go:89] found id: ""
	I1212 19:59:30.000768   54219 logs.go:282] 0 containers: []
	W1212 19:59:30.000775   54219 logs.go:284] No container was found matching "kindnet"
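The sweep above is the readiness probe minikube repeats while waiting for the control plane: for each expected component it asks the CRI runtime whether a matching container exists, and here every query comes back empty. A minimal sketch of the same probe, assuming crictl is installed and configured for the containerd socket (the component list and the crictl invocation are taken from the log; the loop itself is illustrative, not minikube's code):

    #!/bin/bash
    # Probe the runtime for the expected control-plane containers.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done

An all-empty sweep like this one means the kubelet never created the static pods, which is why the log falls back to gathering the kubelet and containerd journals.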
	I1212 19:59:30.000783   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:30.000793   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:30.058301   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:30.058321   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:30.070295   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:30.070312   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:30.139764   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:30.131062   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:30.131753   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:30.133476   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:30.134278   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:30.135896   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
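Each `kubectl describe nodes` attempt above fails for the same underlying reason: nothing is listening on the apiserver port, so the client cycles through connection-refused errors. Two standard commands confirm that from inside the node; port 8441 is taken from the log, but the commands themselves are an illustrative check rather than something minikube runs:

    # Is anything bound to the apiserver port?
    sudo ss -tlnp | grep ':8441 ' || echo "nothing listening on 8441"
    # Does the apiserver answer at all? (-k: the cluster CA is not trusted here)
    curl -ksS https://localhost:8441/healthz || true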
	I1212 19:59:30.139775   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:30.139786   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:30.203348   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:30.203371   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:32.732603   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:32.743210   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:32.743266   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:32.775589   54219 cri.go:89] found id: ""
	I1212 19:59:32.775603   54219 logs.go:282] 0 containers: []
	W1212 19:59:32.775610   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:32.775614   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:32.775673   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:32.799717   54219 cri.go:89] found id: ""
	I1212 19:59:32.799730   54219 logs.go:282] 0 containers: []
	W1212 19:59:32.799737   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:32.799742   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:32.799801   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:32.826819   54219 cri.go:89] found id: ""
	I1212 19:59:32.826832   54219 logs.go:282] 0 containers: []
	W1212 19:59:32.826839   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:32.826844   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:32.826902   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:32.851752   54219 cri.go:89] found id: ""
	I1212 19:59:32.851765   54219 logs.go:282] 0 containers: []
	W1212 19:59:32.851772   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:32.851777   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:32.851832   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:32.876003   54219 cri.go:89] found id: ""
	I1212 19:59:32.876017   54219 logs.go:282] 0 containers: []
	W1212 19:59:32.876024   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:32.876035   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:32.876093   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:32.902460   54219 cri.go:89] found id: ""
	I1212 19:59:32.902474   54219 logs.go:282] 0 containers: []
	W1212 19:59:32.902480   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:32.902504   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:32.902560   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:32.925773   54219 cri.go:89] found id: ""
	I1212 19:59:32.925787   54219 logs.go:282] 0 containers: []
	W1212 19:59:32.925793   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:32.925802   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:32.925812   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:32.936160   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:32.936177   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:33.000494   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:32.992160   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:32.992913   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:32.994556   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:32.994894   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:32.996429   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:59:33.000505   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:33.000515   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:33.066244   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:33.066264   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:33.096113   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:33.096128   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:35.653289   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:35.663651   54219 kubeadm.go:602] duration metric: took 4m3.519380388s to restartPrimaryControlPlane
	W1212 19:59:35.663714   54219 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 19:59:35.663796   54219 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1212 19:59:36.078838   54219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 19:59:36.092917   54219 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 19:59:36.101391   54219 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 19:59:36.101446   54219 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 19:59:36.109781   54219 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 19:59:36.109792   54219 kubeadm.go:158] found existing configuration files:
	
	I1212 19:59:36.109842   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 19:59:36.118044   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 19:59:36.118100   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 19:59:36.125732   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 19:59:36.133647   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 19:59:36.133711   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 19:59:36.141349   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 19:59:36.149338   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 19:59:36.149401   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 19:59:36.156798   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 19:59:36.164406   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 19:59:36.164460   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
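The four grep/rm pairs above are minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not already reference the expected control-plane endpoint is deleted so that kubeadm regenerates it. Here all four files are simply absent, so each grep exits with status 2 and the rm is a no-op. A compact sketch of the same check, with the endpoint taken from the log (illustrative only):

    endpoint="https://control-plane.minikube.internal:8441"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$f"   # missing or stale: let kubeadm rewrite it
      fi
    done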
	I1212 19:59:36.171816   54219 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 19:59:36.215707   54219 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 19:59:36.215925   54219 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 19:59:36.287068   54219 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 19:59:36.287132   54219 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 19:59:36.287172   54219 kubeadm.go:319] OS: Linux
	I1212 19:59:36.287216   54219 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 19:59:36.287263   54219 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 19:59:36.287309   54219 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 19:59:36.287356   54219 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 19:59:36.287415   54219 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 19:59:36.287462   54219 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 19:59:36.287505   54219 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 19:59:36.287552   54219 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 19:59:36.287596   54219 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 19:59:36.350092   54219 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 19:59:36.350201   54219 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 19:59:36.350291   54219 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 19:59:36.357029   54219 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 19:59:36.360551   54219 out.go:252]   - Generating certificates and keys ...
	I1212 19:59:36.360649   54219 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 19:59:36.360718   54219 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 19:59:36.360805   54219 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 19:59:36.360872   54219 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 19:59:36.360946   54219 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 19:59:36.361003   54219 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 19:59:36.361117   54219 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 19:59:36.361314   54219 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 19:59:36.361808   54219 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 19:59:36.362227   54219 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 19:59:36.362588   54219 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 19:59:36.362716   54219 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 19:59:36.513194   54219 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 19:59:36.762182   54219 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 19:59:37.087768   54219 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 19:59:37.827220   54219 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 19:59:38.025150   54219 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 19:59:38.026038   54219 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 19:59:38.030783   54219 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 19:59:38.034177   54219 out.go:252]   - Booting up control plane ...
	I1212 19:59:38.034305   54219 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 19:59:38.035144   54219 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 19:59:38.036428   54219 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 19:59:38.058524   54219 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 19:59:38.058720   54219 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 19:59:38.067348   54219 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 19:59:38.067823   54219 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 19:59:38.067969   54219 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 19:59:38.202645   54219 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 19:59:38.202775   54219 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:03:38.203202   54219 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000900998s
	I1212 20:03:38.203226   54219 kubeadm.go:319] 
	I1212 20:03:38.203283   54219 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:03:38.203315   54219 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:03:38.203419   54219 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:03:38.203424   54219 kubeadm.go:319] 
	I1212 20:03:38.203527   54219 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:03:38.203558   54219 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:03:38.203588   54219 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:03:38.203591   54219 kubeadm.go:319] 
	I1212 20:03:38.208746   54219 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 20:03:38.209173   54219 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:03:38.209280   54219 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:03:38.209544   54219 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 20:03:38.209548   54219 kubeadm.go:319] 
	I1212 20:03:38.209616   54219 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
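The failure reduces to a single probe: kubeadm polls the kubelet's local healthz endpoint for up to four minutes and never gets an answer, as connection-refused here and as a context deadline on the retry below. The equivalent manual checks, using the endpoint and the troubleshooting commands kubeadm itself prints (standard tooling, shown as a sketch):

    # kubeadm's health probe, by hand:
    curl -sS http://127.0.0.1:10248/healthz; echo
    # If that is refused, inspect the kubelet directly:
    systemctl status kubelet
    journalctl -xeu kubelet -n 100 --no-pager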
	W1212 20:03:38.209718   54219 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000900998s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
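Both init attempts also print the cgroups v1 deprecation warning, which fires because the host is running cgroups v1. Per the warning text, a kubelet at v1.35 or newer needs the FailCgroupV1 option set to false to keep running on such a host. A hedged sketch of that opt-out, appended to the kubelet's config file; the field name follows the warning above and the lowerCamelCase convention of KubeletConfiguration YAML, so treat the exact spelling as an assumption:

    # Opt the kubelet out of failing on cgroup v1 hosts (sketch only):
    sudo tee -a /var/lib/kubelet/config.yaml >/dev/null <<'EOF'
    failCgroupV1: false
    EOF
    sudo systemctl restart kubelet

Whether this warning is the actual cause of the kubelet staying down cannot be read from this log alone; the kubelet journal would show its own exit reason.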
	
	I1212 20:03:38.209803   54219 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1212 20:03:38.624272   54219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:03:38.637409   54219 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:03:38.637464   54219 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:03:38.645037   54219 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:03:38.645047   54219 kubeadm.go:158] found existing configuration files:
	
	I1212 20:03:38.645093   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:03:38.652503   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:03:38.652568   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:03:38.659596   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:03:38.667127   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:03:38.667190   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:03:38.674737   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:03:38.682321   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:03:38.682373   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:03:38.689635   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:03:38.696927   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:03:38.696978   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:03:38.704097   54219 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:03:38.743640   54219 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 20:03:38.743913   54219 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:03:38.814950   54219 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:03:38.815010   54219 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 20:03:38.815042   54219 kubeadm.go:319] OS: Linux
	I1212 20:03:38.815098   54219 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:03:38.815149   54219 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:03:38.815192   54219 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:03:38.815236   54219 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:03:38.815280   54219 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:03:38.815324   54219 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:03:38.815365   54219 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:03:38.815409   54219 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:03:38.815451   54219 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:03:38.887100   54219 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:03:38.887197   54219 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:03:38.887281   54219 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:03:38.896370   54219 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:03:38.901736   54219 out.go:252]   - Generating certificates and keys ...
	I1212 20:03:38.901817   54219 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:03:38.901877   54219 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:03:38.901950   54219 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 20:03:38.902007   54219 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 20:03:38.902071   54219 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 20:03:38.902127   54219 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 20:03:38.902186   54219 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 20:03:38.902243   54219 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 20:03:38.902321   54219 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 20:03:38.902389   54219 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 20:03:38.902423   54219 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 20:03:38.902476   54219 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:03:39.125808   54219 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:03:39.338381   54219 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:03:39.401460   54219 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:03:39.625424   54219 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:03:39.783055   54219 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:03:39.783603   54219 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:03:39.786147   54219 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:03:39.789268   54219 out.go:252]   - Booting up control plane ...
	I1212 20:03:39.789370   54219 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:03:39.789458   54219 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:03:39.790103   54219 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:03:39.810111   54219 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:03:39.810207   54219 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:03:39.818331   54219 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:03:39.818818   54219 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:03:39.818950   54219 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:03:39.956538   54219 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:03:39.956645   54219 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:07:39.951298   54219 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001147362s
	I1212 20:07:39.951324   54219 kubeadm.go:319] 
	I1212 20:07:39.951381   54219 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:07:39.951413   54219 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:07:39.951517   54219 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:07:39.951522   54219 kubeadm.go:319] 
	I1212 20:07:39.951625   54219 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:07:39.951656   54219 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:07:39.951686   54219 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:07:39.951689   54219 kubeadm.go:319] 
	I1212 20:07:39.955566   54219 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 20:07:39.956028   54219 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:07:39.956162   54219 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:07:39.956426   54219 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 20:07:39.956433   54219 kubeadm.go:319] 
	I1212 20:07:39.956501   54219 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 20:07:39.956558   54219 kubeadm.go:403] duration metric: took 12m7.846093292s to StartCluster
	I1212 20:07:39.956588   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:07:39.956652   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:07:39.984872   54219 cri.go:89] found id: ""
	I1212 20:07:39.984887   54219 logs.go:282] 0 containers: []
	W1212 20:07:39.984894   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:07:39.984900   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:07:39.984958   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:07:40.008408   54219 cri.go:89] found id: ""
	I1212 20:07:40.008426   54219 logs.go:282] 0 containers: []
	W1212 20:07:40.008433   54219 logs.go:284] No container was found matching "etcd"
	I1212 20:07:40.008439   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:07:40.008502   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:07:40.051885   54219 cri.go:89] found id: ""
	I1212 20:07:40.051899   54219 logs.go:282] 0 containers: []
	W1212 20:07:40.051906   54219 logs.go:284] No container was found matching "coredns"
	I1212 20:07:40.051911   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:07:40.051971   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:07:40.078448   54219 cri.go:89] found id: ""
	I1212 20:07:40.078462   54219 logs.go:282] 0 containers: []
	W1212 20:07:40.078469   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:07:40.078473   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:07:40.078533   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:07:40.105530   54219 cri.go:89] found id: ""
	I1212 20:07:40.105555   54219 logs.go:282] 0 containers: []
	W1212 20:07:40.105562   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:07:40.105568   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:07:40.105632   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:07:40.134868   54219 cri.go:89] found id: ""
	I1212 20:07:40.134884   54219 logs.go:282] 0 containers: []
	W1212 20:07:40.134911   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:07:40.134917   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:07:40.134977   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:07:40.160769   54219 cri.go:89] found id: ""
	I1212 20:07:40.160782   54219 logs.go:282] 0 containers: []
	W1212 20:07:40.160789   54219 logs.go:284] No container was found matching "kindnet"
	I1212 20:07:40.160798   54219 logs.go:123] Gathering logs for container status ...
	I1212 20:07:40.160808   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:07:40.187973   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 20:07:40.187990   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:07:40.250924   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 20:07:40.250942   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:07:40.266149   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:07:40.266165   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:07:40.328697   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:07:40.319521   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:40.320506   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:40.322091   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:40.322633   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:40.324257   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 20:07:40.328707   54219 logs.go:123] Gathering logs for containerd ...
	I1212 20:07:40.328717   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1212 20:07:40.395302   54219 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001147362s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 20:07:40.395340   54219 out.go:285] * 
	W1212 20:07:40.397542   54219 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:07:40.403271   54219 out.go:203] 
	W1212 20:07:40.407023   54219 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001147362s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 20:07:40.407080   54219 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 20:07:40.407103   54219 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 20:07:40.410913   54219 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409212463Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409233845Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409270693Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409288186Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409297991Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409313604Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409322646Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409334633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409357730Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409389073Z" level=info msg="Connect containerd service"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409646705Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.410157440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.430913489Z" level=info msg="Start subscribing containerd event"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.431535088Z" level=info msg="Start recovering state"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.431784515Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.431871117Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469097271Z" level=info msg="Start event monitor"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469264239Z" level=info msg="Start cni network conf syncer for default"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469333685Z" level=info msg="Start streaming server"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469389199Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469443014Z" level=info msg="runtime interface starting up..."
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469502196Z" level=info msg="starting plugins..."
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469562690Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 19:55:30 functional-384006 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.471989321Z" level=info msg="containerd successfully booted in 0.083546s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:07:41.605700   21000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:41.606392   21000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:41.607978   21000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:41.608289   21000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:41.609885   21000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 19:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014827] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497798] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.037128] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.743560] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.524348] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 20:07:41 up 50 min,  0 user,  load average: 0.16, 0.19, 0.35
	Linux functional-384006 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:07:37 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:07:38 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 12 20:07:38 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:07:38 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:07:38 functional-384006 kubelet[20802]: E1212 20:07:38.737281   20802 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:07:38 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:07:38 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:07:39 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 12 20:07:39 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:07:39 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:07:39 functional-384006 kubelet[20807]: E1212 20:07:39.487770   20807 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:07:39 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:07:39 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:07:40 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 12 20:07:40 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:07:40 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:07:40 functional-384006 kubelet[20882]: E1212 20:07:40.259249   20882 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:07:40 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:07:40 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:07:40 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 12 20:07:40 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:07:40 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:07:41 functional-384006 kubelet[20915]: E1212 20:07:41.007304   20915 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:07:41 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:07:41 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-384006 -n functional-384006
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-384006 -n functional-384006: exit status 2 (378.913807ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-384006" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (735.05s)
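The kubelet journal above pins down the actual root cause of this failure: kubelet v1.35.0-beta.0 validates the host cgroup hierarchy at startup and exits on a cgroups v1 host unless the configuration option named in the preflight warning ('FailCgroupV1') is explicitly set to 'false' (and the validation explicitly skipped). A minimal manual-triage sketch follows; it assumes only the standard 'minikube ssh' entry point, and the cgroup-version check is a conventional one not taken from the log — every other command and endpoint is quoted verbatim from the failure output above:

	# Which cgroup hierarchy does the node run? 'cgroup2fs' means cgroups v2;
	# 'tmpfs' means the legacy v1 hierarchy that kubelet v1.35 now rejects.
	minikube ssh -p functional-384006 -- stat -fc %T /sys/fs/cgroup/

	# The health probe kubeadm polled for 4m0s before giving up:
	minikube ssh -p functional-384006 -- curl -sSL http://127.0.0.1:10248/healthz

	# The two triage commands kubeadm itself suggests:
	minikube ssh -p functional-384006 -- sudo systemctl status kubelet
	minikube ssh -p functional-384006 -- sudo journalctl -xeu kubelet

Note that the '--extra-config=kubelet.cgroup-driver=systemd' suggestion minikube prints targets a cgroup-driver mismatch and likely would not clear this validation error; per the warning text, the host must move to cgroups v2 or 'FailCgroupV1' must be set to 'false' in the kubelet configuration.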

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-384006 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-384006 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (57.000928ms)

-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-384006 get po -l tier=control-plane -n kube-system -o=json": exit status 1
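Before reading the post-mortem dump below, the connection-refused symptom can be confirmed directly against the endpoint in the error message; a quick sketch, assuming the host can reach the node address from the stderr above (192.168.49.2:8441) and accepting '-k' because the raw probe bypasses the kubeconfig's CA bundle:

	# Probe the apiserver endpoint the test hit (expect 'connection refused' here):
	curl -k --connect-timeout 5 https://192.168.49.2:8441/healthz || echo "apiserver unreachable"

	# Same check through kubectl, using the test's context:
	kubectl --context functional-384006 get --raw /readyz

Both fail the same way while kubelet never brings up the static control-plane pods, which is consistent with the empty pod list returned above.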
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-384006
helpers_test.go:244: (dbg) docker inspect functional-384006:

-- stdout --
	[
	    {
	        "Id": "b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17",
	        "Created": "2025-12-12T19:40:49.413785329Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43086,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T19:40:49.485581335Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/hosts",
	        "LogPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17-json.log",
	        "Name": "/functional-384006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-384006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-384006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17",
	                "LowerDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296-init/diff:/var/lib/docker/overlay2/e045d4bf347c64f3cbf42a97f0cb5729ed5699bda73ca5751717f555f7c01df1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/merged",
	                "UpperDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/diff",
	                "WorkDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-384006",
	                "Source": "/var/lib/docker/volumes/functional-384006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-384006",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-384006",
	                "name.minikube.sigs.k8s.io": "functional-384006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "36cb954f7d4f6bf90d415ba6b309740af43913afba20f6d7d93ec3c7d90d4de5",
	            "SandboxKey": "/var/run/docker/netns/36cb954f7d4f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-384006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:63:42:b7:50:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef3790c143c0333ab10341d6a40177cef53914dddf926d048a811221f7b4d25e",
	                    "EndpointID": "d9f77e46696253f9c3ce8a0a36703d7a03738ae348c39276dbe99fc3079fb5ee",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-384006",
	                        "b1a98cbc4698"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-384006 -n functional-384006
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-384006 -n functional-384006: exit status 2 (341.141402ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-008271 image ls --format yaml --alsologtostderr                                                                                              │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image   │ functional-008271 image ls --format json --alsologtostderr                                                                                              │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image   │ functional-008271 image ls --format table --alsologtostderr                                                                                             │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ ssh     │ functional-008271 ssh pgrep buildkitd                                                                                                                   │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │                     │
	│ image   │ functional-008271 image build -t localhost/my-image:functional-008271 testdata/build --alsologtostderr                                                  │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ image   │ functional-008271 image ls                                                                                                                              │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ delete  │ -p functional-008271                                                                                                                                    │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
	│ start   │ -p functional-384006 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │                     │
	│ start   │ -p functional-384006 --alsologtostderr -v=8                                                                                                             │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:49 UTC │                     │
	│ cache   │ functional-384006 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ functional-384006 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ functional-384006 cache add registry.k8s.io/pause:latest                                                                                                │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ functional-384006 cache add minikube-local-cache-test:functional-384006                                                                                 │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ functional-384006 cache delete minikube-local-cache-test:functional-384006                                                                              │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ ssh     │ functional-384006 ssh sudo crictl images                                                                                                                │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ ssh     │ functional-384006 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ ssh     │ functional-384006 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │                     │
	│ cache   │ functional-384006 cache reload                                                                                                                          │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ ssh     │ functional-384006 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ kubectl │ functional-384006 kubectl -- --context functional-384006 get pods                                                                                       │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │                     │
	│ start   │ -p functional-384006 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:55:27
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:55:27.852724   54219 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:55:27.853298   54219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:55:27.853302   54219 out.go:374] Setting ErrFile to fd 2...
	I1212 19:55:27.853307   54219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:55:27.853572   54219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 19:55:27.853965   54219 out.go:368] Setting JSON to false
	I1212 19:55:27.854729   54219 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2277,"bootTime":1765567051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 19:55:27.854784   54219 start.go:143] virtualization:  
	I1212 19:55:27.858422   54219 out.go:179] * [functional-384006] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 19:55:27.861585   54219 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:55:27.861670   54219 notify.go:221] Checking for updates...
	I1212 19:55:27.868224   54219 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:55:27.871239   54219 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:55:27.874218   54219 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	I1212 19:55:27.877241   54219 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 19:55:27.880290   54219 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:55:27.883683   54219 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 19:55:27.883824   54219 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:55:27.904994   54219 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 19:55:27.905107   54219 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:55:27.972320   54219 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 19:55:27.96314904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 19:55:27.972416   54219 docker.go:319] overlay module found
	I1212 19:55:27.975641   54219 out.go:179] * Using the docker driver based on existing profile
	I1212 19:55:27.978549   54219 start.go:309] selected driver: docker
	I1212 19:55:27.978557   54219 start.go:927] validating driver "docker" against &{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:55:27.978631   54219 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:55:27.978726   54219 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:55:28.035973   54219 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 19:55:28.026224666 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 19:55:28.036393   54219 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 19:55:28.036415   54219 cni.go:84] Creating CNI manager for ""
	I1212 19:55:28.036463   54219 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 19:55:28.036537   54219 start.go:353] cluster config:
	{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:55:28.039865   54219 out.go:179] * Starting "functional-384006" primary control-plane node in "functional-384006" cluster
	I1212 19:55:28.042798   54219 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 19:55:28.046082   54219 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 19:55:28.048968   54219 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 19:55:28.049006   54219 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 19:55:28.049015   54219 cache.go:65] Caching tarball of preloaded images
	I1212 19:55:28.049057   54219 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 19:55:28.049116   54219 preload.go:238] Found /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 19:55:28.049125   54219 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 19:55:28.049240   54219 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/config.json ...
	I1212 19:55:28.070140   54219 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 19:55:28.070152   54219 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 19:55:28.070172   54219 cache.go:243] Successfully downloaded all kic artifacts
	I1212 19:55:28.070201   54219 start.go:360] acquireMachinesLock for functional-384006: {Name:mk3334c8fedf7efc32fb4628474f2cba3c1d9181 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:55:28.070267   54219 start.go:364] duration metric: took 47.145µs to acquireMachinesLock for "functional-384006"
	I1212 19:55:28.070285   54219 start.go:96] Skipping create...Using existing machine configuration
	I1212 19:55:28.070289   54219 fix.go:54] fixHost starting: 
	I1212 19:55:28.070558   54219 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
	I1212 19:55:28.087483   54219 fix.go:112] recreateIfNeeded on functional-384006: state=Running err=<nil>
	W1212 19:55:28.087503   54219 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 19:55:28.090814   54219 out.go:252] * Updating the running docker "functional-384006" container ...
	I1212 19:55:28.090839   54219 machine.go:94] provisionDockerMachine start ...
	I1212 19:55:28.090929   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:28.108521   54219 main.go:143] libmachine: Using SSH client type: native
	I1212 19:55:28.108845   54219 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:55:28.108851   54219 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 19:55:28.259057   54219 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-384006
	
	I1212 19:55:28.259071   54219 ubuntu.go:182] provisioning hostname "functional-384006"
	I1212 19:55:28.259129   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:28.275402   54219 main.go:143] libmachine: Using SSH client type: native
	I1212 19:55:28.275704   54219 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:55:28.275713   54219 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-384006 && echo "functional-384006" | sudo tee /etc/hostname
	I1212 19:55:28.436755   54219 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-384006
	
	I1212 19:55:28.436820   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:28.461420   54219 main.go:143] libmachine: Using SSH client type: native
	I1212 19:55:28.461717   54219 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:55:28.461739   54219 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-384006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-384006/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-384006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 19:55:28.612044   54219 main.go:143] libmachine: SSH cmd err, output: <nil>: 
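The SSH command above is an idempotent /etc/hosts fixup: do nothing if the hostname is already mapped, rewrite an existing 127.0.1.1 line if there is one, and append a fresh entry otherwise. The same logic as a rough Go sketch (paths and permissions are illustrative only):

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    // ensureHostsEntry reproduces the logged shell logic against a hosts file.
    func ensureHostsEntry(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(string(data), "\n")
        hasName := regexp.MustCompile(`\s` + regexp.QuoteMeta(hostname) + `\b`)
        for _, l := range lines {
            if hasName.MatchString(l) {
                return nil // hostname already mapped; nothing to do
            }
        }
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + hostname // rewrite the loopback alias line
                return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
            }
        }
        lines = append(lines, "127.0.1.1 "+hostname) // no alias line yet: append one
        return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "functional-384006"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }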
	I1212 19:55:28.612060   54219 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-2315/.minikube}
	I1212 19:55:28.612075   54219 ubuntu.go:190] setting up certificates
	I1212 19:55:28.612092   54219 provision.go:84] configureAuth start
	I1212 19:55:28.612163   54219 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384006
	I1212 19:55:28.632765   54219 provision.go:143] copyHostCerts
	I1212 19:55:28.632832   54219 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem, removing ...
	I1212 19:55:28.632839   54219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem
	I1212 19:55:28.632906   54219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem (1123 bytes)
	I1212 19:55:28.633087   54219 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem, removing ...
	I1212 19:55:28.633091   54219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem
	I1212 19:55:28.633116   54219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem (1679 bytes)
	I1212 19:55:28.633174   54219 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem, removing ...
	I1212 19:55:28.633178   54219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem
	I1212 19:55:28.633202   54219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem (1078 bytes)
	I1212 19:55:28.633253   54219 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem org=jenkins.functional-384006 san=[127.0.0.1 192.168.49.2 functional-384006 localhost minikube]
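The provision step above signs a fresh server certificate with the minikube CA, embedding the logged SANs (two IPs, three DNS names). A self-contained sketch of that SAN handling with crypto/x509; the key type, serial numbers, and validity period here are placeholders, not minikube's actual choices:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-ins for ca.pem / ca-key.pem: a throwaway CA generated in place.
        caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-384006"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs exactly as logged: two IPs and three DNS names.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
            DNSNames:    []string{"functional-384006", "localhost", "minikube"},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }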
	I1212 19:55:28.793482   54219 provision.go:177] copyRemoteCerts
	I1212 19:55:28.793529   54219 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 19:55:28.793567   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:28.810312   54219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:55:28.915572   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 19:55:28.933605   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 19:55:28.951138   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 19:55:28.968522   54219 provision.go:87] duration metric: took 356.418282ms to configureAuth
	I1212 19:55:28.968541   54219 ubuntu.go:206] setting minikube options for container-runtime
	I1212 19:55:28.968740   54219 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 19:55:28.968745   54219 machine.go:97] duration metric: took 877.902402ms to provisionDockerMachine
	I1212 19:55:28.968752   54219 start.go:293] postStartSetup for "functional-384006" (driver="docker")
	I1212 19:55:28.968762   54219 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 19:55:28.968808   54219 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 19:55:28.968851   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:28.987014   54219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:55:29.092173   54219 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 19:55:29.095606   54219 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 19:55:29.095622   54219 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 19:55:29.095634   54219 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-2315/.minikube/addons for local assets ...
	I1212 19:55:29.095686   54219 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-2315/.minikube/files for local assets ...
	I1212 19:55:29.095770   54219 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1212 19:55:29.095858   54219 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> hosts in /etc/test/nested/copy/4120
	I1212 19:55:29.095909   54219 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4120
	I1212 19:55:29.103304   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1212 19:55:29.119777   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts --> /etc/test/nested/copy/4120/hosts (40 bytes)
	I1212 19:55:29.137094   54219 start.go:296] duration metric: took 168.327905ms for postStartSetup
	I1212 19:55:29.137179   54219 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 19:55:29.137221   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:29.155438   54219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:55:29.256753   54219 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 19:55:29.261489   54219 fix.go:56] duration metric: took 1.191194255s for fixHost
	I1212 19:55:29.261504   54219 start.go:83] releasing machines lock for "functional-384006", held for 1.19123098s
	I1212 19:55:29.261570   54219 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384006
	I1212 19:55:29.278501   54219 ssh_runner.go:195] Run: cat /version.json
	I1212 19:55:29.278542   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:29.278786   54219 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 19:55:29.278838   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:29.300866   54219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:55:29.303322   54219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:55:29.403647   54219 ssh_runner.go:195] Run: systemctl --version
	I1212 19:55:29.503423   54219 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 19:55:29.507672   54219 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 19:55:29.507733   54219 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 19:55:29.515681   54219 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 19:55:29.515695   54219 start.go:496] detecting cgroup driver to use...
	I1212 19:55:29.515726   54219 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 19:55:29.515780   54219 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 19:55:29.531132   54219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 19:55:29.543869   54219 docker.go:218] disabling cri-docker service (if available) ...
	I1212 19:55:29.543922   54219 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 19:55:29.559268   54219 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 19:55:29.572058   54219 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 19:55:29.685297   54219 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 19:55:29.805225   54219 docker.go:234] disabling docker service ...
	I1212 19:55:29.805279   54219 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 19:55:29.822098   54219 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 19:55:29.834865   54219 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 19:55:29.949324   54219 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 19:55:30.087483   54219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 19:55:30.100955   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 19:55:30.116237   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 19:55:30.126127   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 19:55:30.136085   54219 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 19:55:30.136147   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 19:55:30.145914   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:55:30.154991   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 19:55:30.163972   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:55:30.172470   54219 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 19:55:30.180930   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 19:55:30.190361   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 19:55:30.199337   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 19:55:30.208975   54219 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 19:55:30.216623   54219 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 19:55:30.223993   54219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:55:30.330122   54219 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 19:55:30.473295   54219 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 19:55:30.473369   54219 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
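After restarting containerd, the start code polls for its control socket for up to 60 seconds before proceeding. A small Go sketch of that stat-until-deadline wait; the 250ms probe interval is a guess, not taken from the log:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket mirrors the "Will wait 60s for socket path" step: stat the
    // containerd socket until it exists (as a socket) or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(250 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("containerd socket is up")
    }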
	I1212 19:55:30.477639   54219 start.go:564] Will wait 60s for crictl version
	I1212 19:55:30.477693   54219 ssh_runner.go:195] Run: which crictl
	I1212 19:55:30.481548   54219 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 19:55:30.504633   54219 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 19:55:30.504687   54219 ssh_runner.go:195] Run: containerd --version
	I1212 19:55:30.523789   54219 ssh_runner.go:195] Run: containerd --version
	I1212 19:55:30.548955   54219 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 19:55:30.551786   54219 cli_runner.go:164] Run: docker network inspect functional-384006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 19:55:30.567944   54219 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 19:55:30.574767   54219 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1212 19:55:30.577669   54219 kubeadm.go:884] updating cluster {Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 19:55:30.577791   54219 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 19:55:30.577868   54219 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 19:55:30.602150   54219 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 19:55:30.602162   54219 containerd.go:534] Images already preloaded, skipping extraction
	I1212 19:55:30.602217   54219 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 19:55:30.625907   54219 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 19:55:30.625919   54219 cache_images.go:86] Images are preloaded, skipping loading
	I1212 19:55:30.625925   54219 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1212 19:55:30.626026   54219 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-384006 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 19:55:30.626113   54219 ssh_runner.go:195] Run: sudo crictl info
	I1212 19:55:30.649188   54219 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1212 19:55:30.649208   54219 cni.go:84] Creating CNI manager for ""
	I1212 19:55:30.649216   54219 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 19:55:30.649224   54219 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 19:55:30.649244   54219 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-384006 NodeName:functional-384006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 19:55:30.649349   54219 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-384006"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 19:55:30.649412   54219 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 19:55:30.656757   54219 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 19:55:30.656810   54219 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 19:55:30.663814   54219 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 19:55:30.675878   54219 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 19:55:30.688262   54219 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1212 19:55:30.703971   54219 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 19:55:30.708408   54219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:55:30.839166   54219 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:55:31.445221   54219 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006 for IP: 192.168.49.2
	I1212 19:55:31.445232   54219 certs.go:195] generating shared ca certs ...
	I1212 19:55:31.445248   54219 certs.go:227] acquiring lock for ca certs: {Name:mk39256c1929fe0803d745b94bd58afc348a7e3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:55:31.445419   54219 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key
	I1212 19:55:31.445478   54219 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key
	I1212 19:55:31.445485   54219 certs.go:257] generating profile certs ...
	I1212 19:55:31.445581   54219 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.key
	I1212 19:55:31.445645   54219 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key.6e756d1b
	I1212 19:55:31.445694   54219 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key
	I1212 19:55:31.445823   54219 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem (1338 bytes)
	W1212 19:55:31.445865   54219 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1212 19:55:31.445873   54219 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 19:55:31.445899   54219 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem (1078 bytes)
	I1212 19:55:31.445931   54219 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem (1123 bytes)
	I1212 19:55:31.445954   54219 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem (1679 bytes)
	I1212 19:55:31.446005   54219 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1212 19:55:31.446654   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 19:55:31.468075   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 19:55:31.484808   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 19:55:31.501104   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 19:55:31.519018   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 19:55:31.536328   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 19:55:31.553581   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 19:55:31.570191   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 19:55:31.586954   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1212 19:55:31.603358   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 19:55:31.620509   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1212 19:55:31.637987   54219 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 19:55:31.650484   54219 ssh_runner.go:195] Run: openssl version
	I1212 19:55:31.656450   54219 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1212 19:55:31.663636   54219 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1212 19:55:31.671141   54219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1212 19:55:31.674842   54219 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:40 /usr/share/ca-certificates/4120.pem
	I1212 19:55:31.674900   54219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1212 19:55:31.715596   54219 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 19:55:31.723059   54219 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1212 19:55:31.730233   54219 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1212 19:55:31.737626   54219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1212 19:55:31.741161   54219 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:40 /usr/share/ca-certificates/41202.pem
	I1212 19:55:31.741213   54219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1212 19:55:31.783908   54219 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 19:55:31.791542   54219 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:55:31.799333   54219 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 19:55:31.806999   54219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:55:31.810570   54219 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:30 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:55:31.810630   54219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:55:31.851440   54219 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
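The repeated hash-then-symlink sequence above is how the certs are installed system-wide: the PEM is copied under /usr/share/ca-certificates, its OpenSSL subject hash is computed, and /etc/ssl/certs/<hash>.0 (e.g. b5213941.0) is symlinked to it. A sketch that shells out to the openssl CLI for the hash, using the paths from the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash reproduces the logged dance: `openssl x509 -hash -noout`
    // prints the subject hash (the value behind names like b5213941.0), and the
    // symlink makes the cert discoverable by hash-based trust-store lookups.
    func linkBySubjectHash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        os.Remove(link) // equivalent to ln -fs: replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }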
	I1212 19:55:31.858926   54219 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:55:31.862520   54219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 19:55:31.903666   54219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 19:55:31.944997   54219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 19:55:31.985858   54219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 19:55:32.026779   54219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 19:55:32.067925   54219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
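Each `openssl x509 -checkend 86400` run above asks whether a control-plane certificate expires within the next 24 hours. The same check in pure Go standard library, as an illustrative equivalent:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // checkEnd mirrors `openssl x509 -checkend 86400`: true means the cert is
    // still valid at now+within, i.e. it does NOT expire inside the window.
    func checkEnd(path string, within time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(within).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := checkEnd("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("valid for another 24h:", ok)
    }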
	I1212 19:55:32.110481   54219 kubeadm.go:401] StartCluster: {Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:55:32.110555   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 19:55:32.110624   54219 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:55:32.136703   54219 cri.go:89] found id: ""
	I1212 19:55:32.136771   54219 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 19:55:32.144223   54219 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 19:55:32.144262   54219 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 19:55:32.144312   54219 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 19:55:32.151339   54219 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:55:32.151833   54219 kubeconfig.go:125] found "functional-384006" server: "https://192.168.49.2:8441"
	I1212 19:55:32.153024   54219 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 19:55:32.160890   54219 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-12 19:40:57.602349197 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-12 19:55:30.697011388 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
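Drift detection here is deliberately simple: the freshly rendered kubeadm.yaml.new is compared with the kubeadm.yaml already on disk, and any difference (in this run, the admission-plugins value) triggers a cluster reconfigure. A sketch of that check as a byte comparison plus a unified diff for display, using the paths from the log:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // configDrifted returns true when the desired config differs from the one
    // on disk, printing a unified diff for the log. Hypothetical helper.
    func configDrifted(current, desired string) (bool, error) {
        a, err := os.ReadFile(current)
        if err != nil {
            return false, err
        }
        b, err := os.ReadFile(desired)
        if err != nil {
            return false, err
        }
        if bytes.Equal(a, b) {
            return false, nil
        }
        out, _ := exec.Command("diff", "-u", current, desired).CombinedOutput() // exit status 1 just means "files differ"
        fmt.Print(string(out))
        return true, nil
    }

    func main() {
        drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if drifted {
            fmt.Println("kubeadm config drift detected; cluster will be reconfigured")
        }
    }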
	I1212 19:55:32.160901   54219 kubeadm.go:1161] stopping kube-system containers ...
	I1212 19:55:32.160919   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1212 19:55:32.160971   54219 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:55:32.185826   54219 cri.go:89] found id: ""
	I1212 19:55:32.185884   54219 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 19:55:32.204086   54219 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 19:55:32.212130   54219 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 12 19:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 12 19:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 12 19:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 12 19:45 /etc/kubernetes/scheduler.conf
	
	I1212 19:55:32.212191   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 19:55:32.219934   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 19:55:32.227897   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:55:32.227949   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 19:55:32.235243   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 19:55:32.242858   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:55:32.242920   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 19:55:32.250701   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 19:55:32.258298   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:55:32.258372   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 19:55:32.265710   54219 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 19:55:32.273454   54219 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 19:55:32.324121   54219 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 19:55:33.892385   54219 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.568235814s)
	I1212 19:55:33.892459   54219 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 19:55:34.100445   54219 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 19:55:34.171354   54219 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 19:55:34.217083   54219 api_server.go:52] waiting for apiserver process to appear ...
	I1212 19:55:34.217158   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:34.717278   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:35.217351   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:35.717787   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:36.217788   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:36.717351   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:37.218074   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:37.717373   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:38.218212   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:38.717990   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:39.217746   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:39.717717   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:40.217500   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:40.718081   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:41.217959   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:41.717497   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:42.218218   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:42.717340   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:43.217997   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:43.717351   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:44.217978   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:44.717885   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:45.217387   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:45.718121   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:46.217288   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:46.718053   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:47.217318   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:47.717728   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:48.218067   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:48.717326   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:49.217512   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:49.717353   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:50.217741   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:50.717983   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:51.217333   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:51.717999   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:52.217773   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:52.717402   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:53.217334   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:53.717268   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:54.218070   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:54.717712   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:55.217290   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:55.718107   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:56.217424   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:56.717836   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:57.217448   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:57.718053   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:58.217955   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:58.717942   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:59.218252   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:59.717973   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:00.218214   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:00.718129   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:01.217818   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:01.717354   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:02.218222   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:02.717312   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:03.217601   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:03.717316   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:04.217287   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:04.718088   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:05.217741   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:05.717294   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:06.218217   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:06.717867   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:07.217283   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:07.717349   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:08.217366   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:08.717546   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:09.218108   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:09.717381   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:10.217293   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:10.717333   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:11.217921   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:11.717764   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:12.217784   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:12.718179   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:13.218229   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:13.717368   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:14.217920   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:14.717247   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:15.218046   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:15.717383   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:16.218006   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:16.718040   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:17.217291   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:17.717910   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:18.218203   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:18.717788   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:19.217278   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:19.718149   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:20.217534   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:20.717322   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:21.218045   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:21.717355   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:22.218081   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:22.717268   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:23.218208   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:23.717289   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:24.217232   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:24.717930   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:25.218161   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:25.718192   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:26.217327   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:26.717452   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:27.218230   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:27.717354   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:28.217306   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:28.717853   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:29.218101   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:29.717649   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:30.218027   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:30.718035   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:31.217283   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:31.717340   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:32.218050   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:32.717819   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:33.217245   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:33.717370   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
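The long run of identical pgrep probes above is the apiserver wait loop announced at 19:55:34 ("waiting for apiserver process to appear"): one probe roughly every 500ms until the process shows up or the wait gives up. In this run it never appears, so at 19:56:34 the code falls through to log collection below. A minimal Go sketch of that fixed-interval wait; the one-minute timeout is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer probes for a kube-apiserver process every 500ms, matching
    // the pattern and interval visible in the log above.
    func waitForAPIServer(timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return true // pgrep exits 0 when a matching process exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return false
    }

    func main() {
        if waitForAPIServer(time.Minute) {
            fmt.Println("apiserver process appeared")
        } else {
            fmt.Println("apiserver never appeared; falling back to log collection")
        }
    }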
	I1212 19:56:34.217941   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:34.218012   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:34.255372   54219 cri.go:89] found id: ""
	I1212 19:56:34.255386   54219 logs.go:282] 0 containers: []
	W1212 19:56:34.255399   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:34.255404   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:34.255464   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:34.281284   54219 cri.go:89] found id: ""
	I1212 19:56:34.281297   54219 logs.go:282] 0 containers: []
	W1212 19:56:34.281303   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:34.281308   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:34.281363   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:34.304259   54219 cri.go:89] found id: ""
	I1212 19:56:34.304273   54219 logs.go:282] 0 containers: []
	W1212 19:56:34.304279   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:34.304284   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:34.304338   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:34.327600   54219 cri.go:89] found id: ""
	I1212 19:56:34.327613   54219 logs.go:282] 0 containers: []
	W1212 19:56:34.327620   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:34.327625   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:34.327678   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:34.352303   54219 cri.go:89] found id: ""
	I1212 19:56:34.352317   54219 logs.go:282] 0 containers: []
	W1212 19:56:34.352323   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:34.352328   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:34.352385   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:34.375938   54219 cri.go:89] found id: ""
	I1212 19:56:34.375951   54219 logs.go:282] 0 containers: []
	W1212 19:56:34.375958   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:34.375963   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:34.376019   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:34.399635   54219 cri.go:89] found id: ""
	I1212 19:56:34.399648   54219 logs.go:282] 0 containers: []
	W1212 19:56:34.399655   54219 logs.go:284] No container was found matching "kindnet"
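Each listing above runs `crictl ps -a --quiet --name=<component>`; with --quiet, crictl emits one container ID per line, so empty output is exactly the `found id: ""` / "0 containers" result recorded for every component here. A short Go sketch of issuing and parsing those probes:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs mirrors the per-component crictl probes in the log:
    // empty output means no container (running or exited) matches the name.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(strings.TrimSpace(string(out))), nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainerIDs(name)
            if err != nil {
                fmt.Println(name, "error:", err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
        }
    }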
	I1212 19:56:34.399663   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:34.399675   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:34.457482   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:34.457501   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:34.467864   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:34.467879   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:34.532394   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:34.523991   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:34.524531   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:34.526241   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:34.526742   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:34.528425   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:56:34.532405   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:34.532415   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:34.595426   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:34.595444   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:37.126278   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:37.136103   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:37.136162   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:37.160403   54219 cri.go:89] found id: ""
	I1212 19:56:37.160416   54219 logs.go:282] 0 containers: []
	W1212 19:56:37.160422   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:37.160428   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:37.160483   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:37.184487   54219 cri.go:89] found id: ""
	I1212 19:56:37.184500   54219 logs.go:282] 0 containers: []
	W1212 19:56:37.184507   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:37.184512   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:37.184582   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:37.226352   54219 cri.go:89] found id: ""
	I1212 19:56:37.226366   54219 logs.go:282] 0 containers: []
	W1212 19:56:37.226373   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:37.226378   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:37.226435   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:37.258223   54219 cri.go:89] found id: ""
	I1212 19:56:37.258267   54219 logs.go:282] 0 containers: []
	W1212 19:56:37.258274   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:37.258280   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:37.258349   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:37.285540   54219 cri.go:89] found id: ""
	I1212 19:56:37.285554   54219 logs.go:282] 0 containers: []
	W1212 19:56:37.285561   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:37.285566   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:37.285622   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:37.309113   54219 cri.go:89] found id: ""
	I1212 19:56:37.309126   54219 logs.go:282] 0 containers: []
	W1212 19:56:37.309132   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:37.309147   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:37.309226   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:37.332041   54219 cri.go:89] found id: ""
	I1212 19:56:37.332054   54219 logs.go:282] 0 containers: []
	W1212 19:56:37.332061   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:37.332069   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:37.332079   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:37.387421   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:37.387440   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:37.397657   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:37.397672   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:37.461255   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:37.453122   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:37.453687   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:37.455442   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:37.455987   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:37.457488   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:56:37.461265   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:37.461275   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:37.523429   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:37.523446   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:40.054218   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:40.066551   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:40.066620   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:40.099245   54219 cri.go:89] found id: ""
	I1212 19:56:40.099260   54219 logs.go:282] 0 containers: []
	W1212 19:56:40.099267   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:40.099273   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:40.099336   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:40.127637   54219 cri.go:89] found id: ""
	I1212 19:56:40.127653   54219 logs.go:282] 0 containers: []
	W1212 19:56:40.127660   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:40.127666   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:40.127728   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:40.154877   54219 cri.go:89] found id: ""
	I1212 19:56:40.154892   54219 logs.go:282] 0 containers: []
	W1212 19:56:40.154899   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:40.154904   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:40.154966   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:40.186457   54219 cri.go:89] found id: ""
	I1212 19:56:40.186471   54219 logs.go:282] 0 containers: []
	W1212 19:56:40.186478   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:40.186483   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:40.186540   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:40.223505   54219 cri.go:89] found id: ""
	I1212 19:56:40.223520   54219 logs.go:282] 0 containers: []
	W1212 19:56:40.223527   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:40.223532   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:40.223589   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:40.264967   54219 cri.go:89] found id: ""
	I1212 19:56:40.264981   54219 logs.go:282] 0 containers: []
	W1212 19:56:40.264987   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:40.264992   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:40.265064   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:40.288851   54219 cri.go:89] found id: ""
	I1212 19:56:40.288865   54219 logs.go:282] 0 containers: []
	W1212 19:56:40.288871   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:40.288879   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:40.288889   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:40.345104   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:40.345122   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:40.355393   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:40.355408   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:40.421074   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:40.412933   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:40.413606   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:40.415194   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:40.415715   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:40.417273   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:56:40.421086   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:40.421100   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:40.484292   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:40.484310   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:43.012558   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:43.022764   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:43.022820   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:43.046602   54219 cri.go:89] found id: ""
	I1212 19:56:43.046617   54219 logs.go:282] 0 containers: []
	W1212 19:56:43.046623   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:43.046628   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:43.046688   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:43.070683   54219 cri.go:89] found id: ""
	I1212 19:56:43.070697   54219 logs.go:282] 0 containers: []
	W1212 19:56:43.070703   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:43.070715   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:43.070769   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:43.094890   54219 cri.go:89] found id: ""
	I1212 19:56:43.094904   54219 logs.go:282] 0 containers: []
	W1212 19:56:43.094911   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:43.094915   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:43.094971   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:43.123965   54219 cri.go:89] found id: ""
	I1212 19:56:43.123978   54219 logs.go:282] 0 containers: []
	W1212 19:56:43.123984   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:43.123989   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:43.124043   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:43.149003   54219 cri.go:89] found id: ""
	I1212 19:56:43.149017   54219 logs.go:282] 0 containers: []
	W1212 19:56:43.149024   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:43.149028   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:43.149084   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:43.177565   54219 cri.go:89] found id: ""
	I1212 19:56:43.177578   54219 logs.go:282] 0 containers: []
	W1212 19:56:43.177584   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:43.177589   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:43.177654   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:43.203765   54219 cri.go:89] found id: ""
	I1212 19:56:43.203779   54219 logs.go:282] 0 containers: []
	W1212 19:56:43.203785   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:43.203793   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:43.203803   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:43.267789   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:43.267807   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:43.278476   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:43.278493   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:43.342414   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:43.333163   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:43.333997   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:43.335535   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:43.336094   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:43.337887   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:56:43.342426   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:43.342436   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:43.406378   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:43.406398   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:45.939180   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:45.950923   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:45.950984   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:45.980081   54219 cri.go:89] found id: ""
	I1212 19:56:45.980095   54219 logs.go:282] 0 containers: []
	W1212 19:56:45.980102   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:45.980106   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:45.980162   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:46.008401   54219 cri.go:89] found id: ""
	I1212 19:56:46.008417   54219 logs.go:282] 0 containers: []
	W1212 19:56:46.008425   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:46.008431   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:46.008500   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:46.037350   54219 cri.go:89] found id: ""
	I1212 19:56:46.037364   54219 logs.go:282] 0 containers: []
	W1212 19:56:46.037382   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:46.037388   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:46.037447   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:46.062477   54219 cri.go:89] found id: ""
	I1212 19:56:46.062491   54219 logs.go:282] 0 containers: []
	W1212 19:56:46.062498   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:46.062503   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:46.062562   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:46.088314   54219 cri.go:89] found id: ""
	I1212 19:56:46.088328   54219 logs.go:282] 0 containers: []
	W1212 19:56:46.088335   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:46.088340   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:46.088397   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:46.118483   54219 cri.go:89] found id: ""
	I1212 19:56:46.118496   54219 logs.go:282] 0 containers: []
	W1212 19:56:46.118503   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:46.118513   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:46.118574   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:46.142723   54219 cri.go:89] found id: ""
	I1212 19:56:46.142737   54219 logs.go:282] 0 containers: []
	W1212 19:56:46.142744   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:46.142752   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:46.142773   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:46.213691   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:46.204216   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:46.204961   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:46.206958   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:46.207684   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:46.209470   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:56:46.213700   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:46.213710   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:46.286149   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:46.286168   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:46.313728   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:46.313743   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:46.372694   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:46.372711   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:48.883344   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:48.893476   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:48.893532   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:48.917365   54219 cri.go:89] found id: ""
	I1212 19:56:48.917379   54219 logs.go:282] 0 containers: []
	W1212 19:56:48.917386   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:48.917391   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:48.917446   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:48.941342   54219 cri.go:89] found id: ""
	I1212 19:56:48.941356   54219 logs.go:282] 0 containers: []
	W1212 19:56:48.941363   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:48.941367   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:48.941428   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:48.966988   54219 cri.go:89] found id: ""
	I1212 19:56:48.967001   54219 logs.go:282] 0 containers: []
	W1212 19:56:48.967008   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:48.967013   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:48.967070   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:48.990387   54219 cri.go:89] found id: ""
	I1212 19:56:48.990400   54219 logs.go:282] 0 containers: []
	W1212 19:56:48.990407   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:48.990412   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:48.990474   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:49.016237   54219 cri.go:89] found id: ""
	I1212 19:56:49.016251   54219 logs.go:282] 0 containers: []
	W1212 19:56:49.016257   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:49.016263   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:49.016334   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:49.040263   54219 cri.go:89] found id: ""
	I1212 19:56:49.040276   54219 logs.go:282] 0 containers: []
	W1212 19:56:49.040283   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:49.040289   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:49.040346   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:49.064604   54219 cri.go:89] found id: ""
	I1212 19:56:49.064618   54219 logs.go:282] 0 containers: []
	W1212 19:56:49.064625   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:49.064633   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:49.064643   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:49.122132   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:49.122150   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:49.132901   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:49.132916   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:49.203010   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:49.192320   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:49.192966   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:49.194927   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:49.195674   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:49.197449   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:56:49.203028   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:49.203038   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:49.277223   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:49.277242   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:51.807432   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:51.817646   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:51.817706   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:51.843424   54219 cri.go:89] found id: ""
	I1212 19:56:51.843438   54219 logs.go:282] 0 containers: []
	W1212 19:56:51.843444   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:51.843449   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:51.843510   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:51.868210   54219 cri.go:89] found id: ""
	I1212 19:56:51.868223   54219 logs.go:282] 0 containers: []
	W1212 19:56:51.868230   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:51.868235   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:51.868290   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:51.892493   54219 cri.go:89] found id: ""
	I1212 19:56:51.892506   54219 logs.go:282] 0 containers: []
	W1212 19:56:51.892513   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:51.892518   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:51.892577   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:51.917111   54219 cri.go:89] found id: ""
	I1212 19:56:51.917124   54219 logs.go:282] 0 containers: []
	W1212 19:56:51.917143   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:51.917148   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:51.917203   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:51.945367   54219 cri.go:89] found id: ""
	I1212 19:56:51.945381   54219 logs.go:282] 0 containers: []
	W1212 19:56:51.945387   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:51.945392   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:51.945449   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:51.970026   54219 cri.go:89] found id: ""
	I1212 19:56:51.970040   54219 logs.go:282] 0 containers: []
	W1212 19:56:51.970047   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:51.970053   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:51.970108   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:51.994534   54219 cri.go:89] found id: ""
	I1212 19:56:51.994547   54219 logs.go:282] 0 containers: []
	W1212 19:56:51.994553   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:51.994563   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:51.994573   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:52.028818   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:52.028848   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:52.090429   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:52.090450   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:52.101879   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:52.101895   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:52.171776   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:52.163507   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:52.164111   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:52.165920   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:52.166464   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:52.168011   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:56:52.171787   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:52.171800   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:54.740626   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:54.750925   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:54.750995   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:54.780366   54219 cri.go:89] found id: ""
	I1212 19:56:54.780379   54219 logs.go:282] 0 containers: []
	W1212 19:56:54.780386   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:54.780391   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:54.780449   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:54.804094   54219 cri.go:89] found id: ""
	I1212 19:56:54.804107   54219 logs.go:282] 0 containers: []
	W1212 19:56:54.804113   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:54.804118   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:54.804173   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:54.828262   54219 cri.go:89] found id: ""
	I1212 19:56:54.828276   54219 logs.go:282] 0 containers: []
	W1212 19:56:54.828283   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:54.828288   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:54.828346   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:54.851328   54219 cri.go:89] found id: ""
	I1212 19:56:54.851340   54219 logs.go:282] 0 containers: []
	W1212 19:56:54.851347   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:54.851352   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:54.851406   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:54.874948   54219 cri.go:89] found id: ""
	I1212 19:56:54.874971   54219 logs.go:282] 0 containers: []
	W1212 19:56:54.874978   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:54.874983   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:54.875049   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:54.899059   54219 cri.go:89] found id: ""
	I1212 19:56:54.899072   54219 logs.go:282] 0 containers: []
	W1212 19:56:54.899079   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:54.899085   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:54.899139   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:54.922912   54219 cri.go:89] found id: ""
	I1212 19:56:54.922944   54219 logs.go:282] 0 containers: []
	W1212 19:56:54.922952   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:54.922959   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:54.922969   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:54.982944   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:54.982963   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:54.993620   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:54.993643   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:55.063883   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:55.055908   11437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:55.056618   11437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:55.058221   11437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:55.058538   11437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:55.060159   11437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:56:55.063895   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:55.063905   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:55.126641   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:55.126661   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:57.654341   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:57.664332   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:57.664398   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:57.690295   54219 cri.go:89] found id: ""
	I1212 19:56:57.690312   54219 logs.go:282] 0 containers: []
	W1212 19:56:57.690319   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:57.690324   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:57.690378   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:57.715389   54219 cri.go:89] found id: ""
	I1212 19:56:57.715403   54219 logs.go:282] 0 containers: []
	W1212 19:56:57.715409   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:57.715414   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:57.715485   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:57.741214   54219 cri.go:89] found id: ""
	I1212 19:56:57.741228   54219 logs.go:282] 0 containers: []
	W1212 19:56:57.741234   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:57.741239   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:57.741302   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:57.766791   54219 cri.go:89] found id: ""
	I1212 19:56:57.766804   54219 logs.go:282] 0 containers: []
	W1212 19:56:57.766811   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:57.766817   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:57.766876   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:57.791413   54219 cri.go:89] found id: ""
	I1212 19:56:57.791427   54219 logs.go:282] 0 containers: []
	W1212 19:56:57.791434   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:57.791439   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:57.791494   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:57.815197   54219 cri.go:89] found id: ""
	I1212 19:56:57.815211   54219 logs.go:282] 0 containers: []
	W1212 19:56:57.815218   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:57.815223   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:57.815291   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:57.839238   54219 cri.go:89] found id: ""
	I1212 19:56:57.839251   54219 logs.go:282] 0 containers: []
	W1212 19:56:57.839258   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:57.839265   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:57.839275   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:57.895387   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:57.895408   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:57.906723   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:57.906738   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:57.970462   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:57.962358   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:57.962925   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:57.964418   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:57.964860   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:57.966350   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:56:57.962358   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:57.962925   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:57.964418   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:57.964860   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:57.966350   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:56:57.970473   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:57.970483   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:58.035426   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:58.035459   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
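The block above is one full pass of minikube's wait-for-apiserver loop: a pgrep probe for the kube-apiserver process, a crictl sweep over the control-plane component names, then a diagnostics gather, repeated until a deadline (the timestamps that follow show passes at roughly 2.5-3 second intervals). A minimal Go sketch of that poll-and-diagnose shape is below; the interval, the deadline, and the helper name apiserverRunning are illustrative assumptions, not minikube's actual code.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// apiserverRunning mimics the `sudo pgrep -xnf kube-apiserver.*minikube.*`
// probe in the log: pgrep exits non-zero when no process matches, so any
// error (or empty output) means the apiserver process is not running.
// (Hypothetical helper for illustration.)
func apiserverRunning() bool {
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	return err == nil && strings.TrimSpace(string(out)) != ""
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // assumed timeout, for illustration
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("apiserver process found")
			return
		}
		// On each miss the real loop also gathers diagnostics
		// (crictl, journalctl, dmesg, kubectl describe nodes) before retrying.
		time.Sleep(2500 * time.Millisecond) // assumed poll interval
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```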
	I1212 19:57:00.567794   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:00.577750   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:00.577811   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:00.601472   54219 cri.go:89] found id: ""
	I1212 19:57:00.601485   54219 logs.go:282] 0 containers: []
	W1212 19:57:00.601492   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:00.601497   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:00.601552   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:00.624990   54219 cri.go:89] found id: ""
	I1212 19:57:00.625003   54219 logs.go:282] 0 containers: []
	W1212 19:57:00.625009   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:00.625014   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:00.625069   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:00.652831   54219 cri.go:89] found id: ""
	I1212 19:57:00.652845   54219 logs.go:282] 0 containers: []
	W1212 19:57:00.652852   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:00.652857   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:00.652913   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:00.676463   54219 cri.go:89] found id: ""
	I1212 19:57:00.676477   54219 logs.go:282] 0 containers: []
	W1212 19:57:00.676484   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:00.676489   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:00.676544   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:00.700820   54219 cri.go:89] found id: ""
	I1212 19:57:00.700833   54219 logs.go:282] 0 containers: []
	W1212 19:57:00.700840   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:00.700845   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:00.700904   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:00.728048   54219 cri.go:89] found id: ""
	I1212 19:57:00.728061   54219 logs.go:282] 0 containers: []
	W1212 19:57:00.728068   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:00.728073   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:00.728129   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:00.754114   54219 cri.go:89] found id: ""
	I1212 19:57:00.754127   54219 logs.go:282] 0 containers: []
	W1212 19:57:00.754134   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:00.754142   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:00.754152   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:00.783733   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:00.783749   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:00.842004   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:00.842021   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:00.852440   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:00.852455   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:00.914781   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:00.906826   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:00.907342   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:00.908876   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:00.909350   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:00.910854   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:00.906826   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:00.907342   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:00.908876   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:00.909350   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:00.910854   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:00.914792   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:00.914802   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
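Each `found id: ""` / `0 containers: []` pair above is the output of `crictl ps -a --quiet --name=<component>`, which prints one container ID per line and nothing at all when no container matches; an empty result for every control-plane name is what keeps this loop spinning. A small Go wrapper reproducing that check, assuming crictl is on PATH and sudo is non-interactive (listContainerIDs is a hypothetical helper, not minikube's API):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs wraps `sudo crictl ps -a --quiet --name=<name>`.
// Unlike pgrep, crictl exits 0 with empty output when nothing matches,
// which is exactly the "0 containers" case logged above.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}
```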
	I1212 19:57:03.477311   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:03.488847   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:03.488902   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:03.517173   54219 cri.go:89] found id: ""
	I1212 19:57:03.517186   54219 logs.go:282] 0 containers: []
	W1212 19:57:03.517194   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:03.517198   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:03.517266   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:03.545723   54219 cri.go:89] found id: ""
	I1212 19:57:03.545737   54219 logs.go:282] 0 containers: []
	W1212 19:57:03.545750   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:03.545755   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:03.545812   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:03.572600   54219 cri.go:89] found id: ""
	I1212 19:57:03.572614   54219 logs.go:282] 0 containers: []
	W1212 19:57:03.572622   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:03.572626   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:03.572688   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:03.597001   54219 cri.go:89] found id: ""
	I1212 19:57:03.597015   54219 logs.go:282] 0 containers: []
	W1212 19:57:03.597026   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:03.597031   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:03.597088   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:03.625021   54219 cri.go:89] found id: ""
	I1212 19:57:03.625034   54219 logs.go:282] 0 containers: []
	W1212 19:57:03.625041   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:03.625046   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:03.625104   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:03.653842   54219 cri.go:89] found id: ""
	I1212 19:57:03.653856   54219 logs.go:282] 0 containers: []
	W1212 19:57:03.653864   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:03.653869   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:03.653926   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:03.677783   54219 cri.go:89] found id: ""
	I1212 19:57:03.677797   54219 logs.go:282] 0 containers: []
	W1212 19:57:03.677804   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:03.677812   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:03.677822   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:03.736594   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:03.736617   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:03.747247   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:03.747264   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:03.809956   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:03.801703   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:03.802457   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:03.804050   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:03.804612   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:03.806253   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:03.801703   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:03.802457   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:03.804050   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:03.804612   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:03.806253   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:03.809965   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:03.809987   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:03.871011   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:03.871029   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
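The recurring `connection refused` from kubectl in the blocks above means nothing is listening on localhost:8441 (the --apiserver-port this test starts minikube with), which is consistent with crictl finding no kube-apiserver container at all. A self-contained probe for that condition, with the address and timeout as illustrative assumptions:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// Probe the apiserver endpoint that kubectl keeps failing to reach.
// "connection refused" from DialTimeout means no listener on the port,
// matching the stderr blocks in the log above.
func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8441")
}
```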
	I1212 19:57:06.399328   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:06.409365   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:06.409423   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:06.433061   54219 cri.go:89] found id: ""
	I1212 19:57:06.433075   54219 logs.go:282] 0 containers: []
	W1212 19:57:06.433082   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:06.433094   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:06.433154   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:06.481872   54219 cri.go:89] found id: ""
	I1212 19:57:06.481886   54219 logs.go:282] 0 containers: []
	W1212 19:57:06.481893   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:06.481898   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:06.481954   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:06.510179   54219 cri.go:89] found id: ""
	I1212 19:57:06.510192   54219 logs.go:282] 0 containers: []
	W1212 19:57:06.510200   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:06.510204   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:06.510264   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:06.543022   54219 cri.go:89] found id: ""
	I1212 19:57:06.543036   54219 logs.go:282] 0 containers: []
	W1212 19:57:06.543043   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:06.543048   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:06.543104   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:06.570071   54219 cri.go:89] found id: ""
	I1212 19:57:06.570091   54219 logs.go:282] 0 containers: []
	W1212 19:57:06.570100   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:06.570105   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:06.570170   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:06.599741   54219 cri.go:89] found id: ""
	I1212 19:57:06.599754   54219 logs.go:282] 0 containers: []
	W1212 19:57:06.599761   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:06.599779   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:06.599858   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:06.624514   54219 cri.go:89] found id: ""
	I1212 19:57:06.624528   54219 logs.go:282] 0 containers: []
	W1212 19:57:06.624534   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:06.624542   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:06.624553   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:06.635592   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:06.635610   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:06.702713   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:06.694419   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:06.694856   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:06.696741   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:06.697131   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:06.698788   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:06.694419   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:06.694856   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:06.696741   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:06.697131   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:06.698788   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:06.702724   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:06.702734   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:06.765240   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:06.765258   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:06.793023   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:06.793039   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:09.351721   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:09.361738   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:09.361798   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:09.386854   54219 cri.go:89] found id: ""
	I1212 19:57:09.386867   54219 logs.go:282] 0 containers: []
	W1212 19:57:09.386875   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:09.386880   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:09.386944   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:09.412114   54219 cri.go:89] found id: ""
	I1212 19:57:09.412127   54219 logs.go:282] 0 containers: []
	W1212 19:57:09.412134   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:09.412139   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:09.412197   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:09.449831   54219 cri.go:89] found id: ""
	I1212 19:57:09.449844   54219 logs.go:282] 0 containers: []
	W1212 19:57:09.449854   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:09.449859   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:09.449913   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:09.478096   54219 cri.go:89] found id: ""
	I1212 19:57:09.478109   54219 logs.go:282] 0 containers: []
	W1212 19:57:09.478127   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:09.478133   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:09.478205   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:09.509051   54219 cri.go:89] found id: ""
	I1212 19:57:09.509064   54219 logs.go:282] 0 containers: []
	W1212 19:57:09.509072   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:09.509077   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:09.509140   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:09.533239   54219 cri.go:89] found id: ""
	I1212 19:57:09.533253   54219 logs.go:282] 0 containers: []
	W1212 19:57:09.533259   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:09.533265   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:09.533320   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:09.559093   54219 cri.go:89] found id: ""
	I1212 19:57:09.559108   54219 logs.go:282] 0 containers: []
	W1212 19:57:09.559114   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:09.559122   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:09.559144   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:09.569994   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:09.570010   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:09.632936   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:09.623962   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:09.624715   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:09.626476   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:09.627047   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:09.628827   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:09.623962   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:09.624715   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:09.626476   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:09.627047   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:09.628827   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:09.632947   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:09.632957   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:09.694797   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:09.694815   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:09.723095   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:09.723124   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:12.279206   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:12.289157   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:12.289218   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:12.314051   54219 cri.go:89] found id: ""
	I1212 19:57:12.314065   54219 logs.go:282] 0 containers: []
	W1212 19:57:12.314071   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:12.314077   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:12.314146   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:12.338981   54219 cri.go:89] found id: ""
	I1212 19:57:12.338995   54219 logs.go:282] 0 containers: []
	W1212 19:57:12.339002   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:12.339007   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:12.339064   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:12.364272   54219 cri.go:89] found id: ""
	I1212 19:57:12.364285   54219 logs.go:282] 0 containers: []
	W1212 19:57:12.364294   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:12.364299   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:12.364356   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:12.388633   54219 cri.go:89] found id: ""
	I1212 19:57:12.388647   54219 logs.go:282] 0 containers: []
	W1212 19:57:12.388654   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:12.388659   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:12.388717   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:12.412315   54219 cri.go:89] found id: ""
	I1212 19:57:12.412330   54219 logs.go:282] 0 containers: []
	W1212 19:57:12.412337   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:12.412342   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:12.412399   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:12.435919   54219 cri.go:89] found id: ""
	I1212 19:57:12.435932   54219 logs.go:282] 0 containers: []
	W1212 19:57:12.435938   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:12.435944   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:12.436010   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:12.464586   54219 cri.go:89] found id: ""
	I1212 19:57:12.464600   54219 logs.go:282] 0 containers: []
	W1212 19:57:12.464607   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:12.464615   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:12.464625   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:12.531126   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:12.531144   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:12.541720   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:12.541737   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:12.607440   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:12.598720   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:12.599599   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:12.601461   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:12.602112   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:12.603704   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:12.598720   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:12.599599   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:12.601461   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:12.602112   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:12.603704   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:12.607450   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:12.607460   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:12.669638   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:12.669657   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:15.197082   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:15.207136   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:15.207197   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:15.232075   54219 cri.go:89] found id: ""
	I1212 19:57:15.232089   54219 logs.go:282] 0 containers: []
	W1212 19:57:15.232095   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:15.232101   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:15.232159   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:15.256640   54219 cri.go:89] found id: ""
	I1212 19:57:15.256654   54219 logs.go:282] 0 containers: []
	W1212 19:57:15.256661   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:15.256668   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:15.256725   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:15.281708   54219 cri.go:89] found id: ""
	I1212 19:57:15.281722   54219 logs.go:282] 0 containers: []
	W1212 19:57:15.281729   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:15.281751   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:15.281811   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:15.306602   54219 cri.go:89] found id: ""
	I1212 19:57:15.306615   54219 logs.go:282] 0 containers: []
	W1212 19:57:15.306622   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:15.306627   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:15.306683   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:15.330704   54219 cri.go:89] found id: ""
	I1212 19:57:15.330718   54219 logs.go:282] 0 containers: []
	W1212 19:57:15.330724   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:15.330730   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:15.330788   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:15.356237   54219 cri.go:89] found id: ""
	I1212 19:57:15.356251   54219 logs.go:282] 0 containers: []
	W1212 19:57:15.356258   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:15.356263   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:15.356322   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:15.384137   54219 cri.go:89] found id: ""
	I1212 19:57:15.384149   54219 logs.go:282] 0 containers: []
	W1212 19:57:15.384155   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:15.384163   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:15.384174   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:15.394815   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:15.394831   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:15.464384   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:15.455162   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.455895   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.457601   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.458207   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.459801   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:15.455162   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.455895   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.457601   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.458207   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.459801   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:15.464402   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:15.464413   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:15.531093   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:15.531112   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:15.558272   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:15.558287   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
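Each `Gathering logs for ...` step above shells out through `/bin/bash -c`, exactly as the Run: lines show (journalctl for kubelet and containerd, a filtered dmesg, crictl or docker for container status). A minimal local sketch of that pattern, assuming a local bash and sudo access, with gather as a hypothetical helper; minikube runs these over its SSH runner instead:

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one diagnostic command through `bash -c`, mirroring the
// ssh_runner Run: lines in the log (executed locally here, not over SSH).
func gather(name, cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("=== %s (err=%v) ===\n%s", name, err, out)
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
```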
	I1212 19:57:18.114881   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:18.124888   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:18.124947   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:18.153733   54219 cri.go:89] found id: ""
	I1212 19:57:18.153747   54219 logs.go:282] 0 containers: []
	W1212 19:57:18.153753   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:18.153758   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:18.153819   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:18.179987   54219 cri.go:89] found id: ""
	I1212 19:57:18.180001   54219 logs.go:282] 0 containers: []
	W1212 19:57:18.180007   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:18.180012   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:18.180069   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:18.208210   54219 cri.go:89] found id: ""
	I1212 19:57:18.208223   54219 logs.go:282] 0 containers: []
	W1212 19:57:18.208230   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:18.208235   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:18.208290   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:18.240237   54219 cri.go:89] found id: ""
	I1212 19:57:18.240252   54219 logs.go:282] 0 containers: []
	W1212 19:57:18.240258   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:18.240263   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:18.240321   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:18.263335   54219 cri.go:89] found id: ""
	I1212 19:57:18.263349   54219 logs.go:282] 0 containers: []
	W1212 19:57:18.263356   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:18.263361   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:18.263416   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:18.286920   54219 cri.go:89] found id: ""
	I1212 19:57:18.286933   54219 logs.go:282] 0 containers: []
	W1212 19:57:18.286940   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:18.286945   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:18.286999   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:18.311040   54219 cri.go:89] found id: ""
	I1212 19:57:18.311053   54219 logs.go:282] 0 containers: []
	W1212 19:57:18.311060   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:18.311068   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:18.311077   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:18.366520   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:18.366538   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:18.376885   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:18.376903   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:18.439989   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:18.432083   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.432645   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.434309   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.434875   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.436419   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:18.432083   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.432645   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.434309   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.434875   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.436419   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:18.440010   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:18.440020   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:18.511364   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:18.511384   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:21.043380   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:21.053290   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:21.053345   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:21.077334   54219 cri.go:89] found id: ""
	I1212 19:57:21.077348   54219 logs.go:282] 0 containers: []
	W1212 19:57:21.077355   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:21.077360   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:21.077424   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:21.102108   54219 cri.go:89] found id: ""
	I1212 19:57:21.102122   54219 logs.go:282] 0 containers: []
	W1212 19:57:21.102129   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:21.102141   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:21.102198   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:21.125941   54219 cri.go:89] found id: ""
	I1212 19:57:21.125955   54219 logs.go:282] 0 containers: []
	W1212 19:57:21.125962   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:21.125967   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:21.126022   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:21.150198   54219 cri.go:89] found id: ""
	I1212 19:57:21.150211   54219 logs.go:282] 0 containers: []
	W1212 19:57:21.150218   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:21.150229   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:21.150284   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:21.177722   54219 cri.go:89] found id: ""
	I1212 19:57:21.177736   54219 logs.go:282] 0 containers: []
	W1212 19:57:21.177743   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:21.177748   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:21.177806   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:21.205490   54219 cri.go:89] found id: ""
	I1212 19:57:21.205504   54219 logs.go:282] 0 containers: []
	W1212 19:57:21.205511   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:21.205516   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:21.205574   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:21.230104   54219 cri.go:89] found id: ""
	I1212 19:57:21.230118   54219 logs.go:282] 0 containers: []
	W1212 19:57:21.230125   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:21.230132   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:21.230148   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:21.286638   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:21.286655   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:21.297043   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:21.297058   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:21.358837   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:21.350431   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:21.351064   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:21.352763   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:21.353316   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:21.354959   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:57:21.358847   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:21.358858   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:21.425656   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:21.425676   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
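
The cycle above repeats for every expected control-plane component: cri.go asks crictl for container IDs matching each name, and an empty answer produces the paired `found id: ""` / `0 containers` lines. A minimal Go sketch of that lookup, assuming only that crictl is on the PATH (simplified, hypothetical helper; not minikube's actual cri.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors: sudo crictl ps -a --quiet --name=<name>
    // An empty result corresponds to the `found id: ""` lines in the log.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
            ids, err := containerIDs(c)
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", c)
            }
        }
    }
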
	I1212 19:57:23.965162   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:23.974936   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:23.975001   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:23.998921   54219 cri.go:89] found id: ""
	I1212 19:57:23.998935   54219 logs.go:282] 0 containers: []
	W1212 19:57:23.998942   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:23.998947   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:23.999007   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:24.028254   54219 cri.go:89] found id: ""
	I1212 19:57:24.028283   54219 logs.go:282] 0 containers: []
	W1212 19:57:24.028291   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:24.028296   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:24.028365   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:24.053461   54219 cri.go:89] found id: ""
	I1212 19:57:24.053475   54219 logs.go:282] 0 containers: []
	W1212 19:57:24.053482   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:24.053487   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:24.053546   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:24.082160   54219 cri.go:89] found id: ""
	I1212 19:57:24.082175   54219 logs.go:282] 0 containers: []
	W1212 19:57:24.082182   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:24.082187   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:24.082247   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:24.111368   54219 cri.go:89] found id: ""
	I1212 19:57:24.111381   54219 logs.go:282] 0 containers: []
	W1212 19:57:24.111388   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:24.111394   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:24.111452   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:24.139886   54219 cri.go:89] found id: ""
	I1212 19:57:24.139900   54219 logs.go:282] 0 containers: []
	W1212 19:57:24.139907   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:24.139912   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:24.139966   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:24.165622   54219 cri.go:89] found id: ""
	I1212 19:57:24.165636   54219 logs.go:282] 0 containers: []
	W1212 19:57:24.165644   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:24.165652   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:24.165661   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:24.223024   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:24.223042   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:24.234034   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:24.234049   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:24.300286   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:24.292018   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:24.292708   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:24.294225   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:24.294703   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:24.296238   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:57:24.300298   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:24.300308   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:24.366297   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:24.366324   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
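
The timestamps show the whole probe repeating on a roughly three-second cadence (pgrep at 19:57:21, 19:57:23, 19:57:26, ...), i.e. a poll-until-deadline loop around the apiserver process check. A hypothetical sketch of that shape; the interval and the pgrep pattern are taken from the log, the helper names and the timeout value are invented:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer re-checks for a kube-apiserver process until it
    // appears or the deadline passes. Not minikube's actual wait code.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // Equivalent to: sudo pgrep -xnf kube-apiserver.*minikube.*
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil // process found
            }
            time.Sleep(3 * time.Second) // matches the cadence in the log
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServer(30 * time.Second); err != nil {
            fmt.Println(err)
        }
    }
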
	I1212 19:57:26.892882   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:26.903710   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:26.903767   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:26.928734   54219 cri.go:89] found id: ""
	I1212 19:57:26.928748   54219 logs.go:282] 0 containers: []
	W1212 19:57:26.928754   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:26.928759   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:26.928815   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:26.951741   54219 cri.go:89] found id: ""
	I1212 19:57:26.951754   54219 logs.go:282] 0 containers: []
	W1212 19:57:26.951760   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:26.951765   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:26.951820   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:26.977319   54219 cri.go:89] found id: ""
	I1212 19:57:26.977332   54219 logs.go:282] 0 containers: []
	W1212 19:57:26.977339   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:26.977343   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:26.977396   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:27.005917   54219 cri.go:89] found id: ""
	I1212 19:57:27.005931   54219 logs.go:282] 0 containers: []
	W1212 19:57:27.005937   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:27.005942   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:27.005997   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:27.031546   54219 cri.go:89] found id: ""
	I1212 19:57:27.031561   54219 logs.go:282] 0 containers: []
	W1212 19:57:27.031568   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:27.031573   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:27.031630   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:27.055510   54219 cri.go:89] found id: ""
	I1212 19:57:27.055524   54219 logs.go:282] 0 containers: []
	W1212 19:57:27.055530   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:27.055535   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:27.055593   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:27.083350   54219 cri.go:89] found id: ""
	I1212 19:57:27.083364   54219 logs.go:282] 0 containers: []
	W1212 19:57:27.083370   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:27.083389   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:27.083400   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:27.111521   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:27.111542   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:27.166541   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:27.166558   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:27.177159   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:27.177174   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:27.242522   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:27.234517   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:27.235260   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:27.236963   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:27.237352   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:27.238783   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:57:27.242532   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:27.242542   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
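
Every `kubectl describe nodes` attempt above dies at the TCP layer: nothing is listening on localhost:8441, so the dial itself is refused and no API request is ever made. A self-contained probe that reproduces the symptom (the port number is the one in the log; the program is an illustration, not part of the test suite):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("dial failed:", err) // e.g. connect: connection refused
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }
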
	I1212 19:57:29.804626   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:29.814577   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:29.814643   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:29.840378   54219 cri.go:89] found id: ""
	I1212 19:57:29.840391   54219 logs.go:282] 0 containers: []
	W1212 19:57:29.840398   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:29.840403   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:29.840462   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:29.868144   54219 cri.go:89] found id: ""
	I1212 19:57:29.868157   54219 logs.go:282] 0 containers: []
	W1212 19:57:29.868163   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:29.868168   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:29.868227   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:29.893720   54219 cri.go:89] found id: ""
	I1212 19:57:29.893734   54219 logs.go:282] 0 containers: []
	W1212 19:57:29.893740   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:29.893745   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:29.893812   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:29.922305   54219 cri.go:89] found id: ""
	I1212 19:57:29.922319   54219 logs.go:282] 0 containers: []
	W1212 19:57:29.922326   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:29.922331   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:29.922386   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:29.946347   54219 cri.go:89] found id: ""
	I1212 19:57:29.946366   54219 logs.go:282] 0 containers: []
	W1212 19:57:29.946373   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:29.946378   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:29.946434   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:29.971074   54219 cri.go:89] found id: ""
	I1212 19:57:29.971087   54219 logs.go:282] 0 containers: []
	W1212 19:57:29.971094   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:29.971099   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:29.971158   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:29.994674   54219 cri.go:89] found id: ""
	I1212 19:57:29.994697   54219 logs.go:282] 0 containers: []
	W1212 19:57:29.994704   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:29.994712   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:29.994723   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:30.005086   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:30.005108   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:30.083562   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:30.074527   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:30.075529   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:30.077335   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:30.077677   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:30.079272   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:57:30.083572   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:30.083582   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:30.146070   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:30.146089   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:30.178521   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:30.178538   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
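
Note that the "Gathering logs for" order differs between iterations: kubelet comes first at 19:57:21 and 19:57:24, container status first at 19:57:27, dmesg first at 19:57:29. One plausible explanation, offered here only as an assumption, is that the gatherers are iterated from a Go map, whose order is deliberately randomized:

    package main

    import "fmt"

    func main() {
        gatherers := map[string]string{
            "kubelet":          "journalctl -u kubelet -n 400",
            "dmesg":            "dmesg ... | tail -n 400",
            "describe nodes":   "kubectl describe nodes",
            "containerd":       "journalctl -u containerd -n 400",
            "container status": "crictl ps -a",
        }
        for name := range gatherers { // map order differs run to run
            fmt.Println("Gathering logs for", name, "...")
        }
    }
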
	I1212 19:57:32.735968   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:32.746704   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:32.746766   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:32.773559   54219 cri.go:89] found id: ""
	I1212 19:57:32.773573   54219 logs.go:282] 0 containers: []
	W1212 19:57:32.773579   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:32.773584   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:32.773647   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:32.796720   54219 cri.go:89] found id: ""
	I1212 19:57:32.796733   54219 logs.go:282] 0 containers: []
	W1212 19:57:32.796749   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:32.796755   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:32.796809   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:32.819740   54219 cri.go:89] found id: ""
	I1212 19:57:32.819754   54219 logs.go:282] 0 containers: []
	W1212 19:57:32.819761   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:32.819766   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:32.819824   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:32.845383   54219 cri.go:89] found id: ""
	I1212 19:57:32.845396   54219 logs.go:282] 0 containers: []
	W1212 19:57:32.845404   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:32.845409   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:32.845463   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:32.868404   54219 cri.go:89] found id: ""
	I1212 19:57:32.868417   54219 logs.go:282] 0 containers: []
	W1212 19:57:32.868423   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:32.868428   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:32.868482   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:32.893264   54219 cri.go:89] found id: ""
	I1212 19:57:32.893278   54219 logs.go:282] 0 containers: []
	W1212 19:57:32.893284   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:32.893289   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:32.893342   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:32.918080   54219 cri.go:89] found id: ""
	I1212 19:57:32.918103   54219 logs.go:282] 0 containers: []
	W1212 19:57:32.918111   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:32.918124   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:32.918134   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:32.983660   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:32.976099   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:32.976670   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:32.978233   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:32.978797   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:32.979854   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:57:32.983671   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:32.983682   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:33.050130   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:33.050155   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:33.077660   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:33.077675   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:33.136010   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:33.136028   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
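
The container-status step is a shell fallback: `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a` uses crictl when `which` finds it and otherwise falls back to `docker ps -a`. The same choice expressed as a small Go sketch (hypothetical helper, not minikube code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus prefers crictl when it is on PATH, else docker.
    func containerStatus() ([]byte, error) {
        if _, err := exec.LookPath("crictl"); err == nil {
            return exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
        }
        return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("container status failed:", err)
            return
        }
        fmt.Print(string(out))
    }
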
	I1212 19:57:35.647123   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:35.656832   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:35.656887   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:35.680780   54219 cri.go:89] found id: ""
	I1212 19:57:35.680793   54219 logs.go:282] 0 containers: []
	W1212 19:57:35.680800   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:35.680805   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:35.680863   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:35.710149   54219 cri.go:89] found id: ""
	I1212 19:57:35.710163   54219 logs.go:282] 0 containers: []
	W1212 19:57:35.710171   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:35.710175   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:35.710233   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:35.737709   54219 cri.go:89] found id: ""
	I1212 19:57:35.737722   54219 logs.go:282] 0 containers: []
	W1212 19:57:35.737729   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:35.737734   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:35.737788   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:35.763960   54219 cri.go:89] found id: ""
	I1212 19:57:35.763974   54219 logs.go:282] 0 containers: []
	W1212 19:57:35.763986   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:35.763991   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:35.764053   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:35.796697   54219 cri.go:89] found id: ""
	I1212 19:57:35.796710   54219 logs.go:282] 0 containers: []
	W1212 19:57:35.796718   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:35.796722   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:35.796782   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:35.820208   54219 cri.go:89] found id: ""
	I1212 19:57:35.820222   54219 logs.go:282] 0 containers: []
	W1212 19:57:35.820229   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:35.820234   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:35.820289   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:35.845107   54219 cri.go:89] found id: ""
	I1212 19:57:35.845121   54219 logs.go:282] 0 containers: []
	W1212 19:57:35.845128   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:35.845135   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:35.845148   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:35.904798   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:35.904816   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:35.915282   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:35.915297   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:35.980125   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:35.972354   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:35.972745   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:35.974261   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:35.974577   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:35.976219   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:57:35.980135   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:35.980146   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:36.042456   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:36.042476   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
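
The describe-nodes gatherer shells out to the version-matched kubectl binary under /var/lib/minikube/binaries and, when it fails as at logs.go:130 above, records stdout, stderr, and the exit status in the report. A sketch of that invocation using the exact paths from the log (the surrounding helper is invented):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func describeNodes() (string, string, error) {
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig")
        var out, errb bytes.Buffer
        cmd.Stdout, cmd.Stderr = &out, &errb
        err := cmd.Run() // exits 1 while the apiserver is unreachable
        return out.String(), errb.String(), err
    }

    func main() {
        stdout, stderr, err := describeNodes()
        fmt.Println("err:", err)
        fmt.Println("stdout:", stdout)
        fmt.Println("stderr:", stderr)
    }
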
	I1212 19:57:38.571541   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:38.581597   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:38.581658   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:38.604774   54219 cri.go:89] found id: ""
	I1212 19:57:38.604787   54219 logs.go:282] 0 containers: []
	W1212 19:57:38.604794   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:38.604799   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:38.604853   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:38.630065   54219 cri.go:89] found id: ""
	I1212 19:57:38.630079   54219 logs.go:282] 0 containers: []
	W1212 19:57:38.630085   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:38.630090   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:38.630151   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:38.654890   54219 cri.go:89] found id: ""
	I1212 19:57:38.654903   54219 logs.go:282] 0 containers: []
	W1212 19:57:38.654910   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:38.654915   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:38.654970   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:38.682669   54219 cri.go:89] found id: ""
	I1212 19:57:38.682684   54219 logs.go:282] 0 containers: []
	W1212 19:57:38.682691   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:38.682696   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:38.682753   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:38.728209   54219 cri.go:89] found id: ""
	I1212 19:57:38.728227   54219 logs.go:282] 0 containers: []
	W1212 19:57:38.728244   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:38.728249   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:38.728317   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:38.757740   54219 cri.go:89] found id: ""
	I1212 19:57:38.757753   54219 logs.go:282] 0 containers: []
	W1212 19:57:38.757768   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:38.757774   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:38.757829   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:38.785300   54219 cri.go:89] found id: ""
	I1212 19:57:38.785314   54219 logs.go:282] 0 containers: []
	W1212 19:57:38.785321   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:38.785328   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:38.785338   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:38.841797   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:38.841815   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:38.852807   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:38.852823   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:38.918575   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:38.909773   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:38.910996   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:38.911473   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:38.912932   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:38.913369   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:57:38.918585   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:38.918596   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:38.980647   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:38.980666   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:41.508125   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:41.518560   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:41.518620   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:41.543483   54219 cri.go:89] found id: ""
	I1212 19:57:41.543497   54219 logs.go:282] 0 containers: []
	W1212 19:57:41.543504   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:41.543509   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:41.543565   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:41.568460   54219 cri.go:89] found id: ""
	I1212 19:57:41.568474   54219 logs.go:282] 0 containers: []
	W1212 19:57:41.568481   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:41.568485   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:41.568541   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:41.592454   54219 cri.go:89] found id: ""
	I1212 19:57:41.592468   54219 logs.go:282] 0 containers: []
	W1212 19:57:41.592475   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:41.592480   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:41.592537   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:41.616514   54219 cri.go:89] found id: ""
	I1212 19:57:41.616528   54219 logs.go:282] 0 containers: []
	W1212 19:57:41.616535   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:41.616540   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:41.616600   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:41.640661   54219 cri.go:89] found id: ""
	I1212 19:57:41.640675   54219 logs.go:282] 0 containers: []
	W1212 19:57:41.640681   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:41.640686   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:41.640741   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:41.668228   54219 cri.go:89] found id: ""
	I1212 19:57:41.668241   54219 logs.go:282] 0 containers: []
	W1212 19:57:41.668248   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:41.668254   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:41.668315   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:41.694010   54219 cri.go:89] found id: ""
	I1212 19:57:41.694023   54219 logs.go:282] 0 containers: []
	W1212 19:57:41.694030   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:41.694048   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:41.694057   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:41.759133   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:41.759153   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:41.770184   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:41.770200   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:41.834777   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:41.826216   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:41.826727   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:41.828548   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:41.828893   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:41.830348   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:57:41.834788   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:41.834798   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:41.896691   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:41.896709   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
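
Each failing kubectl run above emits exactly five "couldn't get current server API group list" errors from a single PID before printing the final "connection to the server ... was refused" line, i.e. a bounded retry around API discovery. A generic sketch of that pattern, with the attempt count read off the log rather than taken from kubectl's source:

    package main

    import (
        "errors"
        "fmt"
    )

    // retry runs f up to attempts times, returning the last error.
    func retry(attempts int, f func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = f(); err == nil {
                return nil
            }
        }
        return err
    }

    func main() {
        err := retry(5, func() error {
            return errors.New("connect: connection refused")
        })
        fmt.Println(err)
    }
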
	I1212 19:57:44.424748   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:44.434763   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:44.434819   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:44.458808   54219 cri.go:89] found id: ""
	I1212 19:57:44.458821   54219 logs.go:282] 0 containers: []
	W1212 19:57:44.458833   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:44.458839   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:44.458895   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:44.484932   54219 cri.go:89] found id: ""
	I1212 19:57:44.484945   54219 logs.go:282] 0 containers: []
	W1212 19:57:44.484951   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:44.484956   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:44.485013   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:44.509964   54219 cri.go:89] found id: ""
	I1212 19:57:44.509978   54219 logs.go:282] 0 containers: []
	W1212 19:57:44.509985   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:44.509990   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:44.510047   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:44.538212   54219 cri.go:89] found id: ""
	I1212 19:57:44.538226   54219 logs.go:282] 0 containers: []
	W1212 19:57:44.538233   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:44.538239   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:44.538295   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:44.563029   54219 cri.go:89] found id: ""
	I1212 19:57:44.563043   54219 logs.go:282] 0 containers: []
	W1212 19:57:44.563050   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:44.563058   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:44.563116   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:44.594560   54219 cri.go:89] found id: ""
	I1212 19:57:44.594573   54219 logs.go:282] 0 containers: []
	W1212 19:57:44.594580   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:44.594585   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:44.594648   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:44.618882   54219 cri.go:89] found id: ""
	I1212 19:57:44.618896   54219 logs.go:282] 0 containers: []
	W1212 19:57:44.618903   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:44.618910   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:44.618921   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:44.674635   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:44.674653   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:44.685377   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:44.685392   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:44.767577   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:44.758871   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:44.759548   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:44.761205   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:44.761708   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:44.763309   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:57:44.767587   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:44.767599   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:44.830883   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:44.830901   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
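
The "Gathering logs for ..." steps above are each a single shell command whose output is captured in full. A minimal Go sketch that runs the same commands locally (command strings copied verbatim from the log; sudo access and the systemd units are assumed to exist on the node):

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one of the log-collection commands shown above and prints
// whatever it produced, even on failure, since partial logs still help.
func gather(label, cmd string) {
	fmt.Println("==>", label)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("(command failed:", err, ")")
	}
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("containerd", "sudo journalctl -u containerd -n 400")
	// The backticks substitute the crictl path when one exists; if crictl
	// is missing, the literal word fails and `|| sudo docker ps -a` runs.
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
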
	I1212 19:57:47.361584   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:47.371608   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:47.371664   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:47.397902   54219 cri.go:89] found id: ""
	I1212 19:57:47.397915   54219 logs.go:282] 0 containers: []
	W1212 19:57:47.397922   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:47.397927   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:47.397983   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:47.421839   54219 cri.go:89] found id: ""
	I1212 19:57:47.421852   54219 logs.go:282] 0 containers: []
	W1212 19:57:47.421859   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:47.421864   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:47.421920   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:47.444814   54219 cri.go:89] found id: ""
	I1212 19:57:47.444829   54219 logs.go:282] 0 containers: []
	W1212 19:57:47.444836   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:47.444841   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:47.444895   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:47.470743   54219 cri.go:89] found id: ""
	I1212 19:57:47.470758   54219 logs.go:282] 0 containers: []
	W1212 19:57:47.470765   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:47.470770   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:47.470829   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:47.494189   54219 cri.go:89] found id: ""
	I1212 19:57:47.494202   54219 logs.go:282] 0 containers: []
	W1212 19:57:47.494209   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:47.494214   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:47.494271   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:47.522490   54219 cri.go:89] found id: ""
	I1212 19:57:47.522504   54219 logs.go:282] 0 containers: []
	W1212 19:57:47.522510   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:47.522515   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:47.522573   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:47.546914   54219 cri.go:89] found id: ""
	I1212 19:57:47.546929   54219 logs.go:282] 0 containers: []
	W1212 19:57:47.546938   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:47.546948   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:47.546960   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:47.602569   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:47.602586   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:47.613063   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:47.613077   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:47.675404   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:47.667395   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:47.668258   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:47.669918   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:47.670233   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:47.671713   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:57:47.675413   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:47.675424   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:47.744526   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:47.744545   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
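
Every "describe nodes" attempt above fails with dial tcp [::1]:8441: connect: connection refused: with no kube-apiserver container running, nothing listens on the configured --apiserver-port. A minimal Go sketch of the same TCP-level probe:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// kubectl resolves localhost and tries [::1]:8441 (and 127.0.0.1:8441);
	// with no apiserver process bound there, connect() fails immediately.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err) // connection refused
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port 8441 is accepting connections")
}
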
	I1212 19:57:50.275957   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:50.285985   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:50.286042   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:50.310834   54219 cri.go:89] found id: ""
	I1212 19:57:50.310848   54219 logs.go:282] 0 containers: []
	W1212 19:57:50.310855   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:50.310860   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:50.310915   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:50.335949   54219 cri.go:89] found id: ""
	I1212 19:57:50.335962   54219 logs.go:282] 0 containers: []
	W1212 19:57:50.335969   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:50.335973   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:50.336042   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:50.361218   54219 cri.go:89] found id: ""
	I1212 19:57:50.361233   54219 logs.go:282] 0 containers: []
	W1212 19:57:50.361239   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:50.361244   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:50.361302   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:50.389990   54219 cri.go:89] found id: ""
	I1212 19:57:50.390004   54219 logs.go:282] 0 containers: []
	W1212 19:57:50.390011   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:50.390016   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:50.390070   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:50.414872   54219 cri.go:89] found id: ""
	I1212 19:57:50.414886   54219 logs.go:282] 0 containers: []
	W1212 19:57:50.414893   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:50.414898   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:50.414957   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:50.439081   54219 cri.go:89] found id: ""
	I1212 19:57:50.439094   54219 logs.go:282] 0 containers: []
	W1212 19:57:50.439102   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:50.439106   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:50.439162   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:50.463124   54219 cri.go:89] found id: ""
	I1212 19:57:50.463137   54219 logs.go:282] 0 containers: []
	W1212 19:57:50.463144   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:50.463151   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:50.463160   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:50.519197   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:50.519217   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:50.529678   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:50.529697   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:50.593926   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:50.585789   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:50.586582   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:50.588344   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:50.588667   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:50.589987   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:57:50.593936   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:50.593946   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:50.663627   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:50.663647   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
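
The timestamps above advance in roughly three-second steps (19:57:44, :47, :50, ...): each cycle probes for an apiserver process, re-lists the CRI containers, and re-gathers logs. A minimal Go sketch of that polling cadence; the three-second interval and two-minute deadline here are illustrative values, not minikube's actual settings:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the probe that opens every cycle above:
// `sudo pgrep -xnf kube-apiserver.*minikube.*` exits 0 only on a match.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		// In the real run this is where the container listing and the
		// kubelet/dmesg/describe-nodes/containerd gathering happen.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
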
	I1212 19:57:53.195155   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:53.205007   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:53.205065   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:53.228910   54219 cri.go:89] found id: ""
	I1212 19:57:53.228924   54219 logs.go:282] 0 containers: []
	W1212 19:57:53.228930   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:53.228935   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:53.228992   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:53.256269   54219 cri.go:89] found id: ""
	I1212 19:57:53.256282   54219 logs.go:282] 0 containers: []
	W1212 19:57:53.256289   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:53.256294   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:53.256363   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:53.279490   54219 cri.go:89] found id: ""
	I1212 19:57:53.279505   54219 logs.go:282] 0 containers: []
	W1212 19:57:53.279512   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:53.279517   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:53.279575   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:53.303201   54219 cri.go:89] found id: ""
	I1212 19:57:53.303215   54219 logs.go:282] 0 containers: []
	W1212 19:57:53.303222   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:53.303227   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:53.303285   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:53.331320   54219 cri.go:89] found id: ""
	I1212 19:57:53.331333   54219 logs.go:282] 0 containers: []
	W1212 19:57:53.331349   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:53.331354   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:53.331424   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:53.355603   54219 cri.go:89] found id: ""
	I1212 19:57:53.355617   54219 logs.go:282] 0 containers: []
	W1212 19:57:53.355624   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:53.355629   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:53.355685   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:53.380364   54219 cri.go:89] found id: ""
	I1212 19:57:53.380378   54219 logs.go:282] 0 containers: []
	W1212 19:57:53.380385   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:53.380394   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:53.380405   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:53.448989   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:53.440655   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:53.441253   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:53.442753   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:53.443064   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:53.444518   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:57:53.449000   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:53.449010   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:53.516879   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:53.516908   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:53.550642   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:53.550661   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:53.608676   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:53.608694   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:56.120012   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:56.129790   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:56.129852   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:56.154949   54219 cri.go:89] found id: ""
	I1212 19:57:56.154963   54219 logs.go:282] 0 containers: []
	W1212 19:57:56.154969   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:56.154974   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:56.155029   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:56.178218   54219 cri.go:89] found id: ""
	I1212 19:57:56.178232   54219 logs.go:282] 0 containers: []
	W1212 19:57:56.178240   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:56.178254   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:56.178311   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:56.202037   54219 cri.go:89] found id: ""
	I1212 19:57:56.202053   54219 logs.go:282] 0 containers: []
	W1212 19:57:56.202060   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:56.202065   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:56.202127   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:56.226077   54219 cri.go:89] found id: ""
	I1212 19:57:56.226106   54219 logs.go:282] 0 containers: []
	W1212 19:57:56.226114   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:56.226120   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:56.226183   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:56.249790   54219 cri.go:89] found id: ""
	I1212 19:57:56.249803   54219 logs.go:282] 0 containers: []
	W1212 19:57:56.249810   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:56.249815   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:56.249868   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:56.273767   54219 cri.go:89] found id: ""
	I1212 19:57:56.273780   54219 logs.go:282] 0 containers: []
	W1212 19:57:56.273787   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:56.273793   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:56.273851   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:56.301574   54219 cri.go:89] found id: ""
	I1212 19:57:56.301587   54219 logs.go:282] 0 containers: []
	W1212 19:57:56.301594   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:56.301602   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:56.301612   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:56.362705   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:56.362723   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:56.373142   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:56.373166   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:56.434197   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:56.426404   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:56.426921   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:56.428541   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:56.429015   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:56.430546   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:57:56.434207   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:56.434217   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:56.497280   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:56.497298   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:59.029935   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:59.040115   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:59.040173   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:59.064443   54219 cri.go:89] found id: ""
	I1212 19:57:59.064458   54219 logs.go:282] 0 containers: []
	W1212 19:57:59.064465   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:59.064470   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:59.064525   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:59.089160   54219 cri.go:89] found id: ""
	I1212 19:57:59.089173   54219 logs.go:282] 0 containers: []
	W1212 19:57:59.089180   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:59.089185   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:59.089250   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:59.113771   54219 cri.go:89] found id: ""
	I1212 19:57:59.113785   54219 logs.go:282] 0 containers: []
	W1212 19:57:59.113792   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:59.113797   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:59.113852   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:59.141148   54219 cri.go:89] found id: ""
	I1212 19:57:59.141162   54219 logs.go:282] 0 containers: []
	W1212 19:57:59.141169   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:59.141174   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:59.141241   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:59.163991   54219 cri.go:89] found id: ""
	I1212 19:57:59.164005   54219 logs.go:282] 0 containers: []
	W1212 19:57:59.164011   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:59.164016   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:59.164076   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:59.189011   54219 cri.go:89] found id: ""
	I1212 19:57:59.189026   54219 logs.go:282] 0 containers: []
	W1212 19:57:59.189033   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:59.189038   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:59.189092   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:59.213106   54219 cri.go:89] found id: ""
	I1212 19:57:59.213119   54219 logs.go:282] 0 containers: []
	W1212 19:57:59.213125   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:59.213133   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:59.213143   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:59.268036   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:59.268054   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:59.278468   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:59.278483   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:59.343881   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:59.335767   13724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:59.336563   13724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:59.338140   13724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:59.338447   13724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:59.339954   13724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:57:59.343891   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:59.343909   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:59.406439   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:59.406457   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:01.935967   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:01.947272   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:01.947331   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:01.980222   54219 cri.go:89] found id: ""
	I1212 19:58:01.980235   54219 logs.go:282] 0 containers: []
	W1212 19:58:01.980251   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:01.980257   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:01.980314   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:02.009777   54219 cri.go:89] found id: ""
	I1212 19:58:02.009794   54219 logs.go:282] 0 containers: []
	W1212 19:58:02.009802   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:02.009808   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:02.009899   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:02.042576   54219 cri.go:89] found id: ""
	I1212 19:58:02.042591   54219 logs.go:282] 0 containers: []
	W1212 19:58:02.042598   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:02.042603   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:02.042680   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:02.067370   54219 cri.go:89] found id: ""
	I1212 19:58:02.067384   54219 logs.go:282] 0 containers: []
	W1212 19:58:02.067392   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:02.067397   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:02.067462   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:02.096410   54219 cri.go:89] found id: ""
	I1212 19:58:02.096423   54219 logs.go:282] 0 containers: []
	W1212 19:58:02.096430   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:02.096436   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:02.096495   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:02.120186   54219 cri.go:89] found id: ""
	I1212 19:58:02.120200   54219 logs.go:282] 0 containers: []
	W1212 19:58:02.120207   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:02.120212   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:02.120272   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:02.146219   54219 cri.go:89] found id: ""
	I1212 19:58:02.146233   54219 logs.go:282] 0 containers: []
	W1212 19:58:02.146240   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:02.146264   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:02.146274   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:02.203137   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:02.203156   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:02.214269   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:02.214290   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:02.282468   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:02.273826   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:02.274485   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:02.276251   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:02.276887   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:02.278544   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:02.282477   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:02.282490   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:02.345078   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:02.345096   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:04.874398   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:04.884418   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:04.884477   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:04.913524   54219 cri.go:89] found id: ""
	I1212 19:58:04.913537   54219 logs.go:282] 0 containers: []
	W1212 19:58:04.913544   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:04.913596   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:04.913656   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:04.941905   54219 cri.go:89] found id: ""
	I1212 19:58:04.941919   54219 logs.go:282] 0 containers: []
	W1212 19:58:04.941925   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:04.941930   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:04.941988   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:04.969529   54219 cri.go:89] found id: ""
	I1212 19:58:04.969549   54219 logs.go:282] 0 containers: []
	W1212 19:58:04.969556   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:04.969561   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:04.969619   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:04.998159   54219 cri.go:89] found id: ""
	I1212 19:58:04.998173   54219 logs.go:282] 0 containers: []
	W1212 19:58:04.998180   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:04.998185   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:04.998241   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:05.027027   54219 cri.go:89] found id: ""
	I1212 19:58:05.027042   54219 logs.go:282] 0 containers: []
	W1212 19:58:05.027052   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:05.027057   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:05.027159   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:05.053821   54219 cri.go:89] found id: ""
	I1212 19:58:05.053834   54219 logs.go:282] 0 containers: []
	W1212 19:58:05.053841   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:05.053847   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:05.053903   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:05.078817   54219 cri.go:89] found id: ""
	I1212 19:58:05.078831   54219 logs.go:282] 0 containers: []
	W1212 19:58:05.078837   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:05.078845   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:05.078856   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:05.137908   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:05.137927   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:05.149843   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:05.149859   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:05.216435   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:05.208482   13932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:05.208883   13932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:05.210371   13932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:05.210673   13932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:05.212119   13932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:05.216444   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:05.216454   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:05.281451   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:05.281469   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:07.809177   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:07.819079   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:07.819135   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:07.843677   54219 cri.go:89] found id: ""
	I1212 19:58:07.843691   54219 logs.go:282] 0 containers: []
	W1212 19:58:07.843698   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:07.843703   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:07.843763   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:07.873172   54219 cri.go:89] found id: ""
	I1212 19:58:07.873185   54219 logs.go:282] 0 containers: []
	W1212 19:58:07.873192   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:07.873197   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:07.873251   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:07.898060   54219 cri.go:89] found id: ""
	I1212 19:58:07.898082   54219 logs.go:282] 0 containers: []
	W1212 19:58:07.898090   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:07.898099   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:07.898157   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:07.922099   54219 cri.go:89] found id: ""
	I1212 19:58:07.922113   54219 logs.go:282] 0 containers: []
	W1212 19:58:07.922120   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:07.922131   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:07.922186   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:07.951267   54219 cri.go:89] found id: ""
	I1212 19:58:07.951281   54219 logs.go:282] 0 containers: []
	W1212 19:58:07.951287   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:07.951292   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:07.951350   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:07.979301   54219 cri.go:89] found id: ""
	I1212 19:58:07.979315   54219 logs.go:282] 0 containers: []
	W1212 19:58:07.979322   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:07.979327   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:07.979383   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:08.016405   54219 cri.go:89] found id: ""
	I1212 19:58:08.016418   54219 logs.go:282] 0 containers: []
	W1212 19:58:08.016425   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:08.016433   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:08.016444   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:08.027858   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:08.027875   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:08.095861   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:08.086729   14034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:08.087573   14034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:08.088733   14034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:08.089465   14034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:08.091109   14034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:08.095872   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:08.095885   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:08.159001   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:08.159019   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:08.186794   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:08.186812   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
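The pass above is minikube's apiserver wait-loop diagnostic: pgrep for a kube-apiserver process, a crictl lookup for each control-plane container, then kubelet, dmesg, describe-nodes, containerd, and container-status gathering. A minimal way to re-run the same probe by hand is sketched below; the profile name functional-384006 and port 8441 are assumptions taken from this test's start command, and /livez is the standard apiserver health endpoint.

    # Sketch only: re-run the health probe behind the retry loop above.
    # PROFILE and the 8441 port are assumptions from this test's flags;
    # curl is assumed to be present in the node image.
    PROFILE=functional-384006
    minikube -p "$PROFILE" ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      || echo "no kube-apiserver process"
    minikube -p "$PROFILE" ssh -- sudo crictl ps -a --quiet --name=kube-apiserver \
      || echo "no kube-apiserver container"
    minikube -p "$PROFILE" ssh -- curl -sk https://localhost:8441/livez \
      || echo "apiserver not answering on 8441"

Each command exits nonzero when nothing is found, matching the `found id: ""` and "connection refused" lines in the log.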
	I1212 19:58:10.744419   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:10.755144   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:10.755202   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:10.778581   54219 cri.go:89] found id: ""
	I1212 19:58:10.778594   54219 logs.go:282] 0 containers: []
	W1212 19:58:10.778601   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:10.778607   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:10.778663   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:10.802768   54219 cri.go:89] found id: ""
	I1212 19:58:10.802781   54219 logs.go:282] 0 containers: []
	W1212 19:58:10.802787   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:10.802792   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:10.802850   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:10.828295   54219 cri.go:89] found id: ""
	I1212 19:58:10.828309   54219 logs.go:282] 0 containers: []
	W1212 19:58:10.828316   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:10.828321   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:10.828374   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:10.851350   54219 cri.go:89] found id: ""
	I1212 19:58:10.851363   54219 logs.go:282] 0 containers: []
	W1212 19:58:10.851370   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:10.851375   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:10.851429   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:10.879621   54219 cri.go:89] found id: ""
	I1212 19:58:10.879635   54219 logs.go:282] 0 containers: []
	W1212 19:58:10.879641   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:10.879646   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:10.879700   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:10.905108   54219 cri.go:89] found id: ""
	I1212 19:58:10.905122   54219 logs.go:282] 0 containers: []
	W1212 19:58:10.905129   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:10.905134   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:10.905191   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:10.928365   54219 cri.go:89] found id: ""
	I1212 19:58:10.928379   54219 logs.go:282] 0 containers: []
	W1212 19:58:10.928386   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:10.928394   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:10.928418   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:10.986372   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:10.986390   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:10.997450   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:10.997464   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:11.067488   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:11.059465   14141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:11.060118   14141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:11.061655   14141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:11.062199   14141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:11.063664   14141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:11.067499   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:11.067510   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:11.131069   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:11.131089   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:13.660595   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:13.670703   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:13.670762   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:13.694211   54219 cri.go:89] found id: ""
	I1212 19:58:13.694224   54219 logs.go:282] 0 containers: []
	W1212 19:58:13.694231   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:13.694236   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:13.694291   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:13.724541   54219 cri.go:89] found id: ""
	I1212 19:58:13.724554   54219 logs.go:282] 0 containers: []
	W1212 19:58:13.724561   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:13.724566   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:13.724625   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:13.750194   54219 cri.go:89] found id: ""
	I1212 19:58:13.750207   54219 logs.go:282] 0 containers: []
	W1212 19:58:13.750214   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:13.750219   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:13.750277   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:13.774257   54219 cri.go:89] found id: ""
	I1212 19:58:13.774271   54219 logs.go:282] 0 containers: []
	W1212 19:58:13.774278   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:13.774283   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:13.774338   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:13.799078   54219 cri.go:89] found id: ""
	I1212 19:58:13.799091   54219 logs.go:282] 0 containers: []
	W1212 19:58:13.799097   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:13.799102   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:13.799158   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:13.822710   54219 cri.go:89] found id: ""
	I1212 19:58:13.822724   54219 logs.go:282] 0 containers: []
	W1212 19:58:13.822730   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:13.822735   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:13.822791   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:13.849556   54219 cri.go:89] found id: ""
	I1212 19:58:13.849570   54219 logs.go:282] 0 containers: []
	W1212 19:58:13.849576   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:13.849584   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:13.849595   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:13.907383   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:13.907403   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:13.917866   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:13.917883   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:14.000449   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:13.992686   14245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:13.993186   14245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:13.994632   14245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:13.995157   14245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:13.996620   14245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:14.000458   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:14.000477   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:14.066367   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:14.066386   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:16.594682   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:16.604845   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:16.604903   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:16.629471   54219 cri.go:89] found id: ""
	I1212 19:58:16.629485   54219 logs.go:282] 0 containers: []
	W1212 19:58:16.629493   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:16.629498   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:16.629554   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:16.654890   54219 cri.go:89] found id: ""
	I1212 19:58:16.654904   54219 logs.go:282] 0 containers: []
	W1212 19:58:16.654911   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:16.654916   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:16.654981   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:16.679283   54219 cri.go:89] found id: ""
	I1212 19:58:16.679297   54219 logs.go:282] 0 containers: []
	W1212 19:58:16.679304   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:16.679309   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:16.679362   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:16.704043   54219 cri.go:89] found id: ""
	I1212 19:58:16.704057   54219 logs.go:282] 0 containers: []
	W1212 19:58:16.704065   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:16.704070   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:16.704127   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:16.728139   54219 cri.go:89] found id: ""
	I1212 19:58:16.728153   54219 logs.go:282] 0 containers: []
	W1212 19:58:16.728159   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:16.728164   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:16.728225   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:16.757814   54219 cri.go:89] found id: ""
	I1212 19:58:16.757829   54219 logs.go:282] 0 containers: []
	W1212 19:58:16.757836   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:16.757841   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:16.757894   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:16.782420   54219 cri.go:89] found id: ""
	I1212 19:58:16.782433   54219 logs.go:282] 0 containers: []
	W1212 19:58:16.782441   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:16.782448   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:16.782458   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:16.841763   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:16.841780   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:16.852845   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:16.852861   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:16.920551   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:16.912049   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:16.912668   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:16.914340   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:16.914862   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:16.916428   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:16.920561   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:16.920572   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:16.986769   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:16.986788   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:19.527987   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:19.537931   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:19.537994   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:19.561363   54219 cri.go:89] found id: ""
	I1212 19:58:19.561377   54219 logs.go:282] 0 containers: []
	W1212 19:58:19.561383   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:19.561389   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:19.561444   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:19.584696   54219 cri.go:89] found id: ""
	I1212 19:58:19.584710   54219 logs.go:282] 0 containers: []
	W1212 19:58:19.584717   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:19.584722   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:19.584783   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:19.608796   54219 cri.go:89] found id: ""
	I1212 19:58:19.608816   54219 logs.go:282] 0 containers: []
	W1212 19:58:19.608829   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:19.608834   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:19.608888   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:19.633676   54219 cri.go:89] found id: ""
	I1212 19:58:19.633690   54219 logs.go:282] 0 containers: []
	W1212 19:58:19.633697   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:19.633702   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:19.633765   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:19.656537   54219 cri.go:89] found id: ""
	I1212 19:58:19.656550   54219 logs.go:282] 0 containers: []
	W1212 19:58:19.656557   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:19.656562   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:19.656615   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:19.681676   54219 cri.go:89] found id: ""
	I1212 19:58:19.681689   54219 logs.go:282] 0 containers: []
	W1212 19:58:19.681696   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:19.681701   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:19.681756   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:19.704747   54219 cri.go:89] found id: ""
	I1212 19:58:19.704761   54219 logs.go:282] 0 containers: []
	W1212 19:58:19.704768   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:19.704775   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:19.704785   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:19.760344   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:19.760360   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:19.770729   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:19.770745   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:19.834442   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:19.826076   14456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:19.826835   14456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:19.828378   14456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:19.828837   14456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:19.830354   14456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:19.834452   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:19.834462   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:19.897417   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:19.897437   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:22.424308   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:22.434481   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:22.434537   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:22.458765   54219 cri.go:89] found id: ""
	I1212 19:58:22.458778   54219 logs.go:282] 0 containers: []
	W1212 19:58:22.458785   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:22.458790   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:22.458844   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:22.486364   54219 cri.go:89] found id: ""
	I1212 19:58:22.486378   54219 logs.go:282] 0 containers: []
	W1212 19:58:22.486385   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:22.486403   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:22.486469   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:22.518554   54219 cri.go:89] found id: ""
	I1212 19:58:22.518567   54219 logs.go:282] 0 containers: []
	W1212 19:58:22.518575   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:22.518579   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:22.518648   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:22.543164   54219 cri.go:89] found id: ""
	I1212 19:58:22.543178   54219 logs.go:282] 0 containers: []
	W1212 19:58:22.543185   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:22.543190   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:22.543266   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:22.567677   54219 cri.go:89] found id: ""
	I1212 19:58:22.567691   54219 logs.go:282] 0 containers: []
	W1212 19:58:22.567697   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:22.567702   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:22.567757   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:22.594216   54219 cri.go:89] found id: ""
	I1212 19:58:22.594230   54219 logs.go:282] 0 containers: []
	W1212 19:58:22.594237   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:22.594242   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:22.594310   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:22.622007   54219 cri.go:89] found id: ""
	I1212 19:58:22.622021   54219 logs.go:282] 0 containers: []
	W1212 19:58:22.622028   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:22.622036   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:22.622046   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:22.684696   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:22.684719   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:22.696409   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:22.696425   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:22.763719   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:22.755358   14560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:22.756087   14560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:22.757853   14560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:22.758404   14560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:22.759874   14560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:22.763730   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:22.763742   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:22.828220   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:22.828242   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:25.355355   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:25.367957   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:25.368041   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:25.394847   54219 cri.go:89] found id: ""
	I1212 19:58:25.394861   54219 logs.go:282] 0 containers: []
	W1212 19:58:25.394868   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:25.394873   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:25.394928   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:25.419394   54219 cri.go:89] found id: ""
	I1212 19:58:25.419408   54219 logs.go:282] 0 containers: []
	W1212 19:58:25.419414   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:25.419419   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:25.419477   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:25.444373   54219 cri.go:89] found id: ""
	I1212 19:58:25.444386   54219 logs.go:282] 0 containers: []
	W1212 19:58:25.444393   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:25.444398   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:25.444455   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:25.467872   54219 cri.go:89] found id: ""
	I1212 19:58:25.467886   54219 logs.go:282] 0 containers: []
	W1212 19:58:25.467892   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:25.467897   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:25.467952   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:25.491493   54219 cri.go:89] found id: ""
	I1212 19:58:25.491507   54219 logs.go:282] 0 containers: []
	W1212 19:58:25.491514   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:25.491519   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:25.491575   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:25.515809   54219 cri.go:89] found id: ""
	I1212 19:58:25.515832   54219 logs.go:282] 0 containers: []
	W1212 19:58:25.515864   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:25.515869   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:25.515939   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:25.540733   54219 cri.go:89] found id: ""
	I1212 19:58:25.540747   54219 logs.go:282] 0 containers: []
	W1212 19:58:25.540754   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:25.540762   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:25.540773   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:25.551372   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:25.551387   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:25.613099   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:25.604731   14664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:25.605382   14664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:25.607062   14664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:25.607684   14664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:25.609352   14664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:25.613109   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:25.613119   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:25.674835   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:25.674854   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:25.702894   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:25.702910   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:28.260731   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:28.270423   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:28.270480   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:28.297804   54219 cri.go:89] found id: ""
	I1212 19:58:28.297818   54219 logs.go:282] 0 containers: []
	W1212 19:58:28.297825   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:28.297830   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:28.297887   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:28.322143   54219 cri.go:89] found id: ""
	I1212 19:58:28.322157   54219 logs.go:282] 0 containers: []
	W1212 19:58:28.322164   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:28.322169   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:28.322223   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:28.346215   54219 cri.go:89] found id: ""
	I1212 19:58:28.346229   54219 logs.go:282] 0 containers: []
	W1212 19:58:28.346236   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:28.346241   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:28.346297   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:28.370542   54219 cri.go:89] found id: ""
	I1212 19:58:28.370556   54219 logs.go:282] 0 containers: []
	W1212 19:58:28.370563   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:28.370574   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:28.370634   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:28.397655   54219 cri.go:89] found id: ""
	I1212 19:58:28.397670   54219 logs.go:282] 0 containers: []
	W1212 19:58:28.397677   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:28.397682   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:28.397737   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:28.421548   54219 cri.go:89] found id: ""
	I1212 19:58:28.421561   54219 logs.go:282] 0 containers: []
	W1212 19:58:28.421568   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:28.421573   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:28.421627   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:28.445812   54219 cri.go:89] found id: ""
	I1212 19:58:28.445826   54219 logs.go:282] 0 containers: []
	W1212 19:58:28.445833   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:28.445840   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:28.445850   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:28.501608   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:28.501625   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:28.513441   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:28.513494   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:28.582207   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:28.574455   14770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:28.574891   14770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:28.576467   14770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:28.576813   14770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:28.578288   14770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:28.582217   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:28.582229   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:28.644833   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:28.644850   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:31.174256   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:31.184503   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:31.184561   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:31.220119   54219 cri.go:89] found id: ""
	I1212 19:58:31.220139   54219 logs.go:282] 0 containers: []
	W1212 19:58:31.220147   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:31.220158   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:31.220226   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:31.253789   54219 cri.go:89] found id: ""
	I1212 19:58:31.253802   54219 logs.go:282] 0 containers: []
	W1212 19:58:31.253815   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:31.253825   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:31.253884   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:31.279880   54219 cri.go:89] found id: ""
	I1212 19:58:31.279899   54219 logs.go:282] 0 containers: []
	W1212 19:58:31.279906   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:31.279911   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:31.279965   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:31.304491   54219 cri.go:89] found id: ""
	I1212 19:58:31.304504   54219 logs.go:282] 0 containers: []
	W1212 19:58:31.304511   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:31.304515   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:31.304569   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:31.331430   54219 cri.go:89] found id: ""
	I1212 19:58:31.331444   54219 logs.go:282] 0 containers: []
	W1212 19:58:31.331451   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:31.331456   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:31.331510   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:31.357552   54219 cri.go:89] found id: ""
	I1212 19:58:31.357566   54219 logs.go:282] 0 containers: []
	W1212 19:58:31.357572   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:31.357577   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:31.357633   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:31.381902   54219 cri.go:89] found id: ""
	I1212 19:58:31.381916   54219 logs.go:282] 0 containers: []
	W1212 19:58:31.381923   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:31.381930   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:31.381940   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:31.437813   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:31.437831   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:31.448492   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:31.448509   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:31.513035   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:31.504749   14874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:31.505284   14874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:31.506766   14874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:31.507301   14874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:31.509069   14874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:31.513045   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:31.513056   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:31.574565   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:31.574584   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
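The "container status" step relies on a shell fallback: "which crictl || echo crictl" substitutes a bare "crictl" if the binary is not on PATH, and if that invocation fails the pipeline falls through to "sudo docker ps -a". The same pattern, written out as a hypothetical stand-alone helper (assuming only that crictl and docker behave as ordinary CLIs):

    // statusfallback.go: hypothetical re-implementation of the
    // "crictl ps -a || docker ps -a" fallback seen in the log.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
        if err != nil {
            // crictl missing or failing; fall back to docker,
            // as the shell one-liner above does.
            out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
            if err != nil {
                fmt.Println("no container runtime CLI available:", err)
                return
            }
        }
        fmt.Print(string(out))
    }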
	I1212 19:58:34.102253   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:34.112554   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:34.112620   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:34.137461   54219 cri.go:89] found id: ""
	I1212 19:58:34.137475   54219 logs.go:282] 0 containers: []
	W1212 19:58:34.137482   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:34.137487   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:34.137541   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:34.166138   54219 cri.go:89] found id: ""
	I1212 19:58:34.166161   54219 logs.go:282] 0 containers: []
	W1212 19:58:34.166169   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:34.166174   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:34.166234   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:34.193829   54219 cri.go:89] found id: ""
	I1212 19:58:34.193842   54219 logs.go:282] 0 containers: []
	W1212 19:58:34.193849   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:34.193854   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:34.193906   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:34.231695   54219 cri.go:89] found id: ""
	I1212 19:58:34.231708   54219 logs.go:282] 0 containers: []
	W1212 19:58:34.231716   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:34.231721   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:34.231777   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:34.264331   54219 cri.go:89] found id: ""
	I1212 19:58:34.264344   54219 logs.go:282] 0 containers: []
	W1212 19:58:34.264351   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:34.264356   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:34.264412   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:34.288829   54219 cri.go:89] found id: ""
	I1212 19:58:34.288842   54219 logs.go:282] 0 containers: []
	W1212 19:58:34.288849   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:34.288854   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:34.288908   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:34.316442   54219 cri.go:89] found id: ""
	I1212 19:58:34.316456   54219 logs.go:282] 0 containers: []
	W1212 19:58:34.316463   54219 logs.go:284] No container was found matching "kindnet"
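Each probe pass queries the CRI for every control-plane component by name and gets an empty ID list back (found id: "" ... 0 containers), which is why the wait never resolves. A minimal sketch of that listing step, assuming only the "crictl ps -a --quiet --name=..." interface shown above (not minikube's cri.go itself):

    // criprobe.go: hypothetical version of the per-component container
    // probe. Empty crictl output corresponds to "0 containers" above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            out, err := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            ids := strings.Fields(string(out))
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %v\n", name, ids)
        }
    }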
	I1212 19:58:34.316471   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:34.316481   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:34.376058   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:34.376076   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:34.386998   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:34.387013   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:34.452379   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:34.443685   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:34.444192   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:34.445866   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:34.446403   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:34.448108   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:34.452390   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:34.452401   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:34.514653   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:34.514671   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	[seven further identical probe-and-gather cycles elided: passes starting at 19:58:37, 19:58:39, 19:58:42, 19:58:45, 19:58:48, 19:58:51, and 19:58:54 each find no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers, and each "kubectl describe nodes" attempt fails with the same connection-refused errors against localhost:8441]
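The timestamps show one probe-and-gather pass roughly every three seconds, consistent with a fixed-interval poll against an overall deadline. A hedged sketch of such a loop follows; the 3-second interval is inferred from the timestamps and the deadline is an assumed placeholder, so this is not minikube's actual wait code:

    // apiserverwait.go: illustrative fixed-interval poll matching the
    // ~3 s cadence in the timestamps above. Not minikube source.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // assumed timeout
        for time.Now().Before(deadline) {
            // The same process check the log runs over SSH;
            // pgrep exits nonzero when no process matches.
            err := exec.Command("sudo", "pgrep", "-xnf",
                "kube-apiserver.*minikube.*").Run()
            if err == nil {
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(3 * time.Second)
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }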
	I1212 19:58:57.606891   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:57.617246   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:57.617305   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:57.641248   54219 cri.go:89] found id: ""
	I1212 19:58:57.641261   54219 logs.go:282] 0 containers: []
	W1212 19:58:57.641269   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:57.641274   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:57.641336   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:57.666129   54219 cri.go:89] found id: ""
	I1212 19:58:57.666160   54219 logs.go:282] 0 containers: []
	W1212 19:58:57.666167   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:57.666171   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:57.666226   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:57.690889   54219 cri.go:89] found id: ""
	I1212 19:58:57.690902   54219 logs.go:282] 0 containers: []
	W1212 19:58:57.690913   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:57.690918   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:57.690974   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:57.719998   54219 cri.go:89] found id: ""
	I1212 19:58:57.720012   54219 logs.go:282] 0 containers: []
	W1212 19:58:57.720019   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:57.720024   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:57.720080   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:57.745021   54219 cri.go:89] found id: ""
	I1212 19:58:57.745034   54219 logs.go:282] 0 containers: []
	W1212 19:58:57.745041   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:57.745046   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:57.745102   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:57.769302   54219 cri.go:89] found id: ""
	I1212 19:58:57.769316   54219 logs.go:282] 0 containers: []
	W1212 19:58:57.769322   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:57.769327   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:57.769383   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:57.792874   54219 cri.go:89] found id: ""
	I1212 19:58:57.792887   54219 logs.go:282] 0 containers: []
	W1212 19:58:57.792894   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:57.792902   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:57.792913   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:57.821987   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:57.822003   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:57.878403   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:57.878420   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:57.889240   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:57.889255   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:57.955924   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:57.946885   15825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:57.947418   15825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:57.949013   15825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:57.949699   15825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:57.951375   15825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:57.955936   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:57.955948   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
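The kubectl errors in every "describe nodes" attempt are the same symptom: nothing is listening on the apiserver port, so the TCP connect to [::1]:8441 is refused before any API call is made. One way to confirm from inside the node (a sketch; port 8441 comes from this run's --apiserver-port, and ss and curl are assumed present in the node image):

    # Is anything bound to the apiserver port?
    minikube -p functional-384006 ssh "sudo ss -ltnp | grep 8441 || echo 'nothing listening on 8441'"
    # Unauthenticated liveness probe; -k skips certificate verification.
    minikube -p functional-384006 ssh "curl -sk https://localhost:8441/livez || true"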
	I1212 19:59:00.519976   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:00.530412   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:00.530471   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:00.562296   54219 cri.go:89] found id: ""
	I1212 19:59:00.562309   54219 logs.go:282] 0 containers: []
	W1212 19:59:00.562316   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:00.562321   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:00.562381   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:00.590126   54219 cri.go:89] found id: ""
	I1212 19:59:00.590140   54219 logs.go:282] 0 containers: []
	W1212 19:59:00.590147   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:00.590152   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:00.590208   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:00.618262   54219 cri.go:89] found id: ""
	I1212 19:59:00.618276   54219 logs.go:282] 0 containers: []
	W1212 19:59:00.618282   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:00.618287   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:00.618350   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:00.643416   54219 cri.go:89] found id: ""
	I1212 19:59:00.643430   54219 logs.go:282] 0 containers: []
	W1212 19:59:00.643437   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:00.643442   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:00.643497   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:00.668447   54219 cri.go:89] found id: ""
	I1212 19:59:00.668461   54219 logs.go:282] 0 containers: []
	W1212 19:59:00.668469   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:00.668474   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:00.668534   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:00.695735   54219 cri.go:89] found id: ""
	I1212 19:59:00.695748   54219 logs.go:282] 0 containers: []
	W1212 19:59:00.695755   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:00.695760   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:00.695820   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:00.729197   54219 cri.go:89] found id: ""
	I1212 19:59:00.729211   54219 logs.go:282] 0 containers: []
	W1212 19:59:00.729219   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:00.729226   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:00.729237   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:00.739980   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:00.739996   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:00.812904   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:00.804740   15919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:00.805626   15919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:00.806481   15919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:00.807322   15919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:00.809016   15919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:59:00.812914   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:00.812925   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:00.876760   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:00.876778   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:00.905954   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:00.905970   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
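When the apiserver never appears, the journals gathered in each cycle are the primary evidence: kubelet (why the static pods were not started), containerd (whether sandbox creation failed), and dmesg (OOM kills or cgroup errors). The same data can be pulled directly with the commands exactly as logged:

    minikube -p functional-384006 ssh "sudo journalctl -u kubelet -n 400"
    minikube -p functional-384006 ssh "sudo journalctl -u containerd -n 400"
    minikube -p functional-384006 ssh "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"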
	I1212 19:59:03.466026   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:03.476441   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:03.476505   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:03.512755   54219 cri.go:89] found id: ""
	I1212 19:59:03.512774   54219 logs.go:282] 0 containers: []
	W1212 19:59:03.512781   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:03.512786   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:03.512844   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:03.536972   54219 cri.go:89] found id: ""
	I1212 19:59:03.536992   54219 logs.go:282] 0 containers: []
	W1212 19:59:03.536999   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:03.537004   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:03.537071   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:03.564981   54219 cri.go:89] found id: ""
	I1212 19:59:03.564995   54219 logs.go:282] 0 containers: []
	W1212 19:59:03.565002   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:03.565006   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:03.565061   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:03.589258   54219 cri.go:89] found id: ""
	I1212 19:59:03.589271   54219 logs.go:282] 0 containers: []
	W1212 19:59:03.589278   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:03.589283   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:03.589335   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:03.617627   54219 cri.go:89] found id: ""
	I1212 19:59:03.617649   54219 logs.go:282] 0 containers: []
	W1212 19:59:03.617656   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:03.617661   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:03.617724   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:03.643124   54219 cri.go:89] found id: ""
	I1212 19:59:03.643137   54219 logs.go:282] 0 containers: []
	W1212 19:59:03.643144   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:03.643149   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:03.643205   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:03.667587   54219 cri.go:89] found id: ""
	I1212 19:59:03.667601   54219 logs.go:282] 0 containers: []
	W1212 19:59:03.667607   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:03.667615   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:03.667624   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:03.724310   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:03.724326   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:03.735089   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:03.735105   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:03.799034   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:03.791373   16026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:03.792104   16026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:03.793630   16026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:03.793918   16026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:03.795356   16026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:59:03.799043   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:03.799054   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:03.861867   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:03.861885   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
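The "container status" step uses a small shell fallback worth noting: `which crictl || echo crictl` substitutes the literal word crictl when the binary is missing, so the first command fails cleanly and the docker branch runs instead. Expanded, the idiom as logged is:

    # Prefer crictl; if it is absent, fall through to docker.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a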
	I1212 19:59:06.393541   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:06.403453   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:06.403511   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:06.427440   54219 cri.go:89] found id: ""
	I1212 19:59:06.427454   54219 logs.go:282] 0 containers: []
	W1212 19:59:06.427460   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:06.427465   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:06.427524   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:06.457341   54219 cri.go:89] found id: ""
	I1212 19:59:06.457355   54219 logs.go:282] 0 containers: []
	W1212 19:59:06.457361   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:06.457366   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:06.457424   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:06.495095   54219 cri.go:89] found id: ""
	I1212 19:59:06.495110   54219 logs.go:282] 0 containers: []
	W1212 19:59:06.495116   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:06.495122   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:06.495179   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:06.522006   54219 cri.go:89] found id: ""
	I1212 19:59:06.522041   54219 logs.go:282] 0 containers: []
	W1212 19:59:06.522048   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:06.522053   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:06.522111   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:06.551005   54219 cri.go:89] found id: ""
	I1212 19:59:06.551019   54219 logs.go:282] 0 containers: []
	W1212 19:59:06.551026   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:06.551031   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:06.551099   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:06.576063   54219 cri.go:89] found id: ""
	I1212 19:59:06.576089   54219 logs.go:282] 0 containers: []
	W1212 19:59:06.576096   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:06.576101   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:06.576157   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:06.601543   54219 cri.go:89] found id: ""
	I1212 19:59:06.601557   54219 logs.go:282] 0 containers: []
	W1212 19:59:06.601565   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:06.601572   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:06.601582   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:06.657957   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:06.657977   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:06.668650   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:06.668665   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:06.730730   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:06.722725   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:06.723501   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:06.725053   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:06.725374   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:06.726867   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:59:06.730739   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:06.730749   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:06.793201   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:06.793219   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:09.321790   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:09.332762   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:09.332820   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:09.359927   54219 cri.go:89] found id: ""
	I1212 19:59:09.359941   54219 logs.go:282] 0 containers: []
	W1212 19:59:09.359948   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:09.359953   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:09.360026   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:09.385111   54219 cri.go:89] found id: ""
	I1212 19:59:09.385125   54219 logs.go:282] 0 containers: []
	W1212 19:59:09.385137   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:09.385142   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:09.385201   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:09.416991   54219 cri.go:89] found id: ""
	I1212 19:59:09.417006   54219 logs.go:282] 0 containers: []
	W1212 19:59:09.417013   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:09.417018   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:09.417077   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:09.442593   54219 cri.go:89] found id: ""
	I1212 19:59:09.442606   54219 logs.go:282] 0 containers: []
	W1212 19:59:09.442612   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:09.442617   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:09.442672   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:09.469724   54219 cri.go:89] found id: ""
	I1212 19:59:09.469738   54219 logs.go:282] 0 containers: []
	W1212 19:59:09.469745   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:09.469750   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:09.469806   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:09.506134   54219 cri.go:89] found id: ""
	I1212 19:59:09.506148   54219 logs.go:282] 0 containers: []
	W1212 19:59:09.506154   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:09.506160   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:09.506226   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:09.537548   54219 cri.go:89] found id: ""
	I1212 19:59:09.537561   54219 logs.go:282] 0 containers: []
	W1212 19:59:09.537568   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:09.537576   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:09.537585   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:09.596110   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:09.596128   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:09.607356   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:09.607373   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:09.678885   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:09.670805   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:09.671533   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:09.673167   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:09.673470   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:09.674917   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:59:09.678895   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:09.678906   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:09.744120   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:09.744138   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:12.273229   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:12.283400   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:12.283456   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:12.307126   54219 cri.go:89] found id: ""
	I1212 19:59:12.307140   54219 logs.go:282] 0 containers: []
	W1212 19:59:12.307147   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:12.307152   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:12.307208   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:12.333237   54219 cri.go:89] found id: ""
	I1212 19:59:12.333250   54219 logs.go:282] 0 containers: []
	W1212 19:59:12.333257   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:12.333261   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:12.333318   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:12.357336   54219 cri.go:89] found id: ""
	I1212 19:59:12.357349   54219 logs.go:282] 0 containers: []
	W1212 19:59:12.357356   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:12.357361   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:12.357416   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:12.382066   54219 cri.go:89] found id: ""
	I1212 19:59:12.382080   54219 logs.go:282] 0 containers: []
	W1212 19:59:12.382086   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:12.382091   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:12.382147   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:12.406069   54219 cri.go:89] found id: ""
	I1212 19:59:12.406082   54219 logs.go:282] 0 containers: []
	W1212 19:59:12.406089   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:12.406094   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:12.406149   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:12.434345   54219 cri.go:89] found id: ""
	I1212 19:59:12.434365   54219 logs.go:282] 0 containers: []
	W1212 19:59:12.434372   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:12.434377   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:12.434457   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:12.466422   54219 cri.go:89] found id: ""
	I1212 19:59:12.466436   54219 logs.go:282] 0 containers: []
	W1212 19:59:12.466444   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:12.466451   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:12.466462   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:12.528768   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:12.528787   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:12.541490   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:12.541508   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:12.602589   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:12.594584   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:12.594975   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:12.596484   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:12.596787   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:12.598425   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:59:12.602599   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:12.602609   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:12.664894   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:12.664913   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:15.192235   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:15.202664   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:15.202722   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:15.227464   54219 cri.go:89] found id: ""
	I1212 19:59:15.227477   54219 logs.go:282] 0 containers: []
	W1212 19:59:15.227484   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:15.227489   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:15.227545   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:15.251075   54219 cri.go:89] found id: ""
	I1212 19:59:15.251089   54219 logs.go:282] 0 containers: []
	W1212 19:59:15.251096   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:15.251101   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:15.251156   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:15.275993   54219 cri.go:89] found id: ""
	I1212 19:59:15.276006   54219 logs.go:282] 0 containers: []
	W1212 19:59:15.276013   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:15.276018   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:15.276075   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:15.299883   54219 cri.go:89] found id: ""
	I1212 19:59:15.299896   54219 logs.go:282] 0 containers: []
	W1212 19:59:15.299903   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:15.299908   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:15.299961   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:15.324623   54219 cri.go:89] found id: ""
	I1212 19:59:15.324636   54219 logs.go:282] 0 containers: []
	W1212 19:59:15.324642   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:15.324647   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:15.324702   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:15.350461   54219 cri.go:89] found id: ""
	I1212 19:59:15.350474   54219 logs.go:282] 0 containers: []
	W1212 19:59:15.350481   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:15.350486   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:15.350541   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:15.375380   54219 cri.go:89] found id: ""
	I1212 19:59:15.375407   54219 logs.go:282] 0 containers: []
	W1212 19:59:15.375415   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:15.375423   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:15.375434   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:15.431649   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:15.431669   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:15.444811   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:15.444836   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:15.537885   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:15.529076   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:15.529839   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:15.530552   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:15.532384   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:15.532848   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:59:15.537895   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:15.537908   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:15.604300   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:15.604319   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:18.136615   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:18.146971   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:18.147036   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:18.176330   54219 cri.go:89] found id: ""
	I1212 19:59:18.176344   54219 logs.go:282] 0 containers: []
	W1212 19:59:18.176351   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:18.176359   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:18.176416   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:18.200844   54219 cri.go:89] found id: ""
	I1212 19:59:18.200857   54219 logs.go:282] 0 containers: []
	W1212 19:59:18.200863   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:18.200868   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:18.200924   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:18.224026   54219 cri.go:89] found id: ""
	I1212 19:59:18.224040   54219 logs.go:282] 0 containers: []
	W1212 19:59:18.224046   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:18.224051   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:18.224107   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:18.252073   54219 cri.go:89] found id: ""
	I1212 19:59:18.252086   54219 logs.go:282] 0 containers: []
	W1212 19:59:18.252093   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:18.252098   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:18.252153   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:18.277440   54219 cri.go:89] found id: ""
	I1212 19:59:18.277454   54219 logs.go:282] 0 containers: []
	W1212 19:59:18.277460   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:18.277465   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:18.277521   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:18.302183   54219 cri.go:89] found id: ""
	I1212 19:59:18.302197   54219 logs.go:282] 0 containers: []
	W1212 19:59:18.302214   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:18.302220   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:18.302286   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:18.326037   54219 cri.go:89] found id: ""
	I1212 19:59:18.326058   54219 logs.go:282] 0 containers: []
	W1212 19:59:18.326065   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:18.326073   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:18.326083   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:18.380825   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:18.380843   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:18.391618   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:18.391634   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:18.463287   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:18.454358   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:18.455450   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:18.457129   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:18.457425   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:18.459011   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:59:18.463297   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:18.463309   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:18.536948   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:18.536967   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
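After this many cycles with zero containers in any state, the kubelet service itself is the prime suspect rather than any individual pod. Checking the unit state directly is the usual next step (a sketch; systemd unit names assumed as used by the minikube node image):

    minikube -p functional-384006 ssh "sudo systemctl is-active kubelet containerd"
    minikube -p functional-384006 ssh "sudo systemctl status kubelet --no-pager -l"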
	I1212 19:59:21.064758   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:21.074846   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:21.074903   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:21.099031   54219 cri.go:89] found id: ""
	I1212 19:59:21.099044   54219 logs.go:282] 0 containers: []
	W1212 19:59:21.099051   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:21.099056   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:21.099109   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:21.123108   54219 cri.go:89] found id: ""
	I1212 19:59:21.123121   54219 logs.go:282] 0 containers: []
	W1212 19:59:21.123127   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:21.123132   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:21.123187   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:21.146869   54219 cri.go:89] found id: ""
	I1212 19:59:21.146883   54219 logs.go:282] 0 containers: []
	W1212 19:59:21.146890   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:21.146895   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:21.146964   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:21.171309   54219 cri.go:89] found id: ""
	I1212 19:59:21.171323   54219 logs.go:282] 0 containers: []
	W1212 19:59:21.171329   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:21.171340   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:21.171395   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:21.195200   54219 cri.go:89] found id: ""
	I1212 19:59:21.195213   54219 logs.go:282] 0 containers: []
	W1212 19:59:21.195219   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:21.195224   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:21.195282   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:21.218648   54219 cri.go:89] found id: ""
	I1212 19:59:21.218661   54219 logs.go:282] 0 containers: []
	W1212 19:59:21.218668   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:21.218673   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:21.218726   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:21.243375   54219 cri.go:89] found id: ""
	I1212 19:59:21.243388   54219 logs.go:282] 0 containers: []
	W1212 19:59:21.243395   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:21.243402   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:21.243411   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:21.299185   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:21.299202   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:21.309826   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:21.309840   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:21.373437   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:21.365006   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.365633   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.367303   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.367959   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.369725   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:21.365006   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.365633   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.367303   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.367959   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.369725   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:21.373447   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:21.373457   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:21.435817   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:21.435878   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:23.968994   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:23.978907   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:23.978964   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:24.004005   54219 cri.go:89] found id: ""
	I1212 19:59:24.004018   54219 logs.go:282] 0 containers: []
	W1212 19:59:24.004025   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:24.004030   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:24.004085   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:24.031561   54219 cri.go:89] found id: ""
	I1212 19:59:24.031576   54219 logs.go:282] 0 containers: []
	W1212 19:59:24.031583   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:24.031588   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:24.031648   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:24.058089   54219 cri.go:89] found id: ""
	I1212 19:59:24.058105   54219 logs.go:282] 0 containers: []
	W1212 19:59:24.058113   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:24.058120   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:24.058183   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:24.083693   54219 cri.go:89] found id: ""
	I1212 19:59:24.083707   54219 logs.go:282] 0 containers: []
	W1212 19:59:24.083713   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:24.083718   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:24.083774   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:24.110732   54219 cri.go:89] found id: ""
	I1212 19:59:24.110746   54219 logs.go:282] 0 containers: []
	W1212 19:59:24.110753   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:24.110758   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:24.110814   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:24.135252   54219 cri.go:89] found id: ""
	I1212 19:59:24.135266   54219 logs.go:282] 0 containers: []
	W1212 19:59:24.135273   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:24.135278   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:24.135330   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:24.158751   54219 cri.go:89] found id: ""
	I1212 19:59:24.158765   54219 logs.go:282] 0 containers: []
	W1212 19:59:24.158771   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:24.158779   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:24.158788   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:24.188496   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:24.188513   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:24.244683   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:24.244701   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:24.255424   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:24.255440   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:24.324102   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:24.316334   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:24.316892   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:24.318479   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:24.319116   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:24.320192   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:24.316334   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:24.316892   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:24.318479   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:24.319116   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:24.320192   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:24.324113   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:24.324126   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:26.896008   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:26.906451   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:26.906508   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:26.930525   54219 cri.go:89] found id: ""
	I1212 19:59:26.930538   54219 logs.go:282] 0 containers: []
	W1212 19:59:26.930546   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:26.930551   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:26.930607   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:26.954197   54219 cri.go:89] found id: ""
	I1212 19:59:26.954212   54219 logs.go:282] 0 containers: []
	W1212 19:59:26.954219   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:26.954224   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:26.954284   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:26.978362   54219 cri.go:89] found id: ""
	I1212 19:59:26.978375   54219 logs.go:282] 0 containers: []
	W1212 19:59:26.978381   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:26.978388   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:26.978444   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:27.003156   54219 cri.go:89] found id: ""
	I1212 19:59:27.003170   54219 logs.go:282] 0 containers: []
	W1212 19:59:27.003177   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:27.003182   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:27.003241   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:27.035090   54219 cri.go:89] found id: ""
	I1212 19:59:27.035103   54219 logs.go:282] 0 containers: []
	W1212 19:59:27.035110   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:27.035115   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:27.035170   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:27.059270   54219 cri.go:89] found id: ""
	I1212 19:59:27.059284   54219 logs.go:282] 0 containers: []
	W1212 19:59:27.059291   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:27.059296   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:27.059351   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:27.083068   54219 cri.go:89] found id: ""
	I1212 19:59:27.083081   54219 logs.go:282] 0 containers: []
	W1212 19:59:27.083088   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:27.083096   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:27.083105   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:27.138962   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:27.138979   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:27.149646   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:27.149662   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:27.216025   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:27.207685   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:27.208329   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:27.210138   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:27.210711   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:27.212312   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:27.207685   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:27.208329   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:27.210138   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:27.210711   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:27.212312   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:27.216036   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:27.216046   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:27.277808   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:27.277826   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:29.806087   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:29.816453   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:29.816508   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:29.839921   54219 cri.go:89] found id: ""
	I1212 19:59:29.839935   54219 logs.go:282] 0 containers: []
	W1212 19:59:29.839943   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:29.839950   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:29.840023   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:29.868215   54219 cri.go:89] found id: ""
	I1212 19:59:29.868229   54219 logs.go:282] 0 containers: []
	W1212 19:59:29.868236   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:29.868241   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:29.868298   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:29.892199   54219 cri.go:89] found id: ""
	I1212 19:59:29.892212   54219 logs.go:282] 0 containers: []
	W1212 19:59:29.892219   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:29.892226   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:29.892281   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:29.921316   54219 cri.go:89] found id: ""
	I1212 19:59:29.921330   54219 logs.go:282] 0 containers: []
	W1212 19:59:29.921336   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:29.921351   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:29.921415   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:29.946039   54219 cri.go:89] found id: ""
	I1212 19:59:29.946053   54219 logs.go:282] 0 containers: []
	W1212 19:59:29.946059   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:29.946064   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:29.946125   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:29.976514   54219 cri.go:89] found id: ""
	I1212 19:59:29.976528   54219 logs.go:282] 0 containers: []
	W1212 19:59:29.976536   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:29.976541   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:29.976601   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:30.000755   54219 cri.go:89] found id: ""
	I1212 19:59:30.000768   54219 logs.go:282] 0 containers: []
	W1212 19:59:30.000775   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:30.000783   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:30.000793   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:30.058301   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:30.058321   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:30.070295   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:30.070312   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:30.139764   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:30.131062   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:30.131753   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:30.133476   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:30.134278   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:30.135896   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:30.131062   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:30.131753   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:30.133476   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:30.134278   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:30.135896   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:30.139775   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:30.139786   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:30.203348   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:30.203371   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:32.732603   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:32.743210   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:32.743266   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:32.775589   54219 cri.go:89] found id: ""
	I1212 19:59:32.775603   54219 logs.go:282] 0 containers: []
	W1212 19:59:32.775610   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:32.775614   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:32.775673   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:32.799717   54219 cri.go:89] found id: ""
	I1212 19:59:32.799730   54219 logs.go:282] 0 containers: []
	W1212 19:59:32.799737   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:32.799742   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:32.799801   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:32.826819   54219 cri.go:89] found id: ""
	I1212 19:59:32.826832   54219 logs.go:282] 0 containers: []
	W1212 19:59:32.826839   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:32.826844   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:32.826902   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:32.851752   54219 cri.go:89] found id: ""
	I1212 19:59:32.851765   54219 logs.go:282] 0 containers: []
	W1212 19:59:32.851772   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:32.851777   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:32.851832   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:32.876003   54219 cri.go:89] found id: ""
	I1212 19:59:32.876017   54219 logs.go:282] 0 containers: []
	W1212 19:59:32.876024   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:32.876035   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:32.876093   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:32.902460   54219 cri.go:89] found id: ""
	I1212 19:59:32.902474   54219 logs.go:282] 0 containers: []
	W1212 19:59:32.902480   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:32.902504   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:32.902560   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:32.925773   54219 cri.go:89] found id: ""
	I1212 19:59:32.925787   54219 logs.go:282] 0 containers: []
	W1212 19:59:32.925793   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:32.925802   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:32.925812   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:32.936160   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:32.936177   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:33.000494   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:32.992160   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:32.992913   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:32.994556   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:32.994894   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:32.996429   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:32.992160   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:32.992913   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:32.994556   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:32.994894   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:32.996429   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:33.000505   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:33.000515   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:33.066244   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:33.066264   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:33.096113   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:33.096128   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:35.653289   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:35.663651   54219 kubeadm.go:602] duration metric: took 4m3.519380388s to restartPrimaryControlPlane
	W1212 19:59:35.663714   54219 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 19:59:35.663796   54219 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1212 19:59:36.078838   54219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 19:59:36.092917   54219 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 19:59:36.101391   54219 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 19:59:36.101446   54219 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 19:59:36.109781   54219 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 19:59:36.109792   54219 kubeadm.go:158] found existing configuration files:
	
	I1212 19:59:36.109842   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 19:59:36.118044   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 19:59:36.118100   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 19:59:36.125732   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 19:59:36.133647   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 19:59:36.133711   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 19:59:36.141349   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 19:59:36.149338   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 19:59:36.149401   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 19:59:36.156798   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 19:59:36.164406   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 19:59:36.164460   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
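After 4m3s without a reachable apiserver, minikube abandons the soft restart, runs kubeadm reset --force, and sweeps /etc/kubernetes for stale kubeconfigs before re-initializing: each file is grepped for the expected endpoint https://control-plane.minikube.internal:8441 and deleted when the check fails. Here the reset has already removed the files, so every grep exits with status 2 and the rm -f calls are no-ops. The sweep is equivalent to roughly this shell loop (a sketch inferred from the commands logged above, not minikube's actual implementation):

  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f" \
      || sudo rm -f "/etc/kubernetes/$f"
  done
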
	I1212 19:59:36.171816   54219 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 19:59:36.215707   54219 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 19:59:36.215925   54219 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 19:59:36.287068   54219 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 19:59:36.287132   54219 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 19:59:36.287172   54219 kubeadm.go:319] OS: Linux
	I1212 19:59:36.287216   54219 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 19:59:36.287263   54219 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 19:59:36.287309   54219 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 19:59:36.287356   54219 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 19:59:36.287415   54219 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 19:59:36.287462   54219 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 19:59:36.287505   54219 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 19:59:36.287552   54219 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 19:59:36.287596   54219 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 19:59:36.350092   54219 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 19:59:36.350201   54219 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 19:59:36.350291   54219 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 19:59:36.357029   54219 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 19:59:36.360551   54219 out.go:252]   - Generating certificates and keys ...
	I1212 19:59:36.360649   54219 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 19:59:36.360718   54219 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 19:59:36.360805   54219 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 19:59:36.360872   54219 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 19:59:36.360946   54219 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 19:59:36.361003   54219 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 19:59:36.361117   54219 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 19:59:36.361314   54219 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 19:59:36.361808   54219 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 19:59:36.362227   54219 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 19:59:36.362588   54219 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 19:59:36.362716   54219 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 19:59:36.513194   54219 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 19:59:36.762182   54219 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 19:59:37.087768   54219 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 19:59:37.827220   54219 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 19:59:38.025150   54219 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 19:59:38.026038   54219 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 19:59:38.030783   54219 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 19:59:38.034177   54219 out.go:252]   - Booting up control plane ...
	I1212 19:59:38.034305   54219 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 19:59:38.035144   54219 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 19:59:38.036428   54219 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 19:59:38.058524   54219 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 19:59:38.058720   54219 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 19:59:38.067348   54219 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 19:59:38.067823   54219 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 19:59:38.067969   54219 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 19:59:38.202645   54219 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 19:59:38.202775   54219 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:03:38.203202   54219 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000900998s
	I1212 20:03:38.203226   54219 kubeadm.go:319] 
	I1212 20:03:38.203283   54219 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:03:38.203315   54219 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:03:38.203419   54219 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:03:38.203424   54219 kubeadm.go:319] 
	I1212 20:03:38.203527   54219 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:03:38.203558   54219 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:03:38.203588   54219 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:03:38.203591   54219 kubeadm.go:319] 
	I1212 20:03:38.208746   54219 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 20:03:38.209173   54219 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:03:38.209280   54219 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:03:38.209544   54219 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 20:03:38.209548   54219 kubeadm.go:319] 
	I1212 20:03:38.209616   54219 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1212 20:03:38.209718   54219 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000900998s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
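The likely root cause is in the second SystemVerification warning above: this host runs cgroups v1 (kernel 5.15.0-1084-aws), and per that warning kubelet v1.35 or newer refuses to start on cgroup v1 unless its configuration explicitly sets FailCgroupV1 to false. The kubelet therefore never serves http://127.0.0.1:10248/healthz, and kubeadm's wait-control-plane phase times out after 4m0s with the connection-refused error shown. The opt-in named by the warning is a KubeletConfiguration field, e.g. (a sketch; whether minikube's generated kubelet config should carry this for v1.35.0-beta.0 is precisely what this failure points at):

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  failCgroupV1: false
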
	
	I1212 20:03:38.209803   54219 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1212 20:03:38.624272   54219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:03:38.637409   54219 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:03:38.637464   54219 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:03:38.645037   54219 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:03:38.645047   54219 kubeadm.go:158] found existing configuration files:
	
	I1212 20:03:38.645093   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:03:38.652503   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:03:38.652568   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:03:38.659596   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:03:38.667127   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:03:38.667190   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:03:38.674737   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:03:38.682321   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:03:38.682373   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:03:38.689635   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:03:38.696927   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:03:38.696978   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:03:38.704097   54219 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:03:38.743640   54219 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 20:03:38.743913   54219 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:03:38.814950   54219 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:03:38.815010   54219 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 20:03:38.815042   54219 kubeadm.go:319] OS: Linux
	I1212 20:03:38.815098   54219 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:03:38.815149   54219 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:03:38.815192   54219 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:03:38.815236   54219 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:03:38.815280   54219 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:03:38.815324   54219 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:03:38.815365   54219 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:03:38.815409   54219 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:03:38.815451   54219 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:03:38.887100   54219 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:03:38.887197   54219 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:03:38.887281   54219 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:03:38.896370   54219 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:03:38.901736   54219 out.go:252]   - Generating certificates and keys ...
	I1212 20:03:38.901817   54219 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:03:38.901877   54219 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:03:38.901950   54219 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 20:03:38.902007   54219 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 20:03:38.902071   54219 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 20:03:38.902127   54219 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 20:03:38.902186   54219 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 20:03:38.902243   54219 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 20:03:38.902321   54219 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 20:03:38.902389   54219 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 20:03:38.902423   54219 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 20:03:38.902476   54219 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:03:39.125808   54219 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:03:39.338381   54219 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:03:39.401460   54219 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:03:39.625424   54219 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:03:39.783055   54219 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:03:39.783603   54219 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:03:39.786147   54219 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:03:39.789268   54219 out.go:252]   - Booting up control plane ...
	I1212 20:03:39.789370   54219 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:03:39.789458   54219 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:03:39.790103   54219 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:03:39.810111   54219 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:03:39.810207   54219 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:03:39.818331   54219 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:03:39.818818   54219 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:03:39.818950   54219 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:03:39.956538   54219 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:03:39.956645   54219 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:07:39.951298   54219 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001147362s
	I1212 20:07:39.951324   54219 kubeadm.go:319] 
	I1212 20:07:39.951381   54219 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:07:39.951413   54219 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:07:39.951517   54219 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:07:39.951522   54219 kubeadm.go:319] 
	I1212 20:07:39.951625   54219 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:07:39.951656   54219 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:07:39.951686   54219 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:07:39.951689   54219 kubeadm.go:319] 
	I1212 20:07:39.955566   54219 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 20:07:39.956028   54219 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:07:39.956162   54219 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:07:39.956426   54219 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 20:07:39.956433   54219 kubeadm.go:319] 
	I1212 20:07:39.956501   54219 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 20:07:39.956558   54219 kubeadm.go:403] duration metric: took 12m7.846093292s to StartCluster
	I1212 20:07:39.956588   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:07:39.956652   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:07:39.984872   54219 cri.go:89] found id: ""
	I1212 20:07:39.984887   54219 logs.go:282] 0 containers: []
	W1212 20:07:39.984894   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:07:39.984900   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:07:39.984958   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:07:40.008408   54219 cri.go:89] found id: ""
	I1212 20:07:40.008426   54219 logs.go:282] 0 containers: []
	W1212 20:07:40.008433   54219 logs.go:284] No container was found matching "etcd"
	I1212 20:07:40.008439   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:07:40.008502   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:07:40.051885   54219 cri.go:89] found id: ""
	I1212 20:07:40.051899   54219 logs.go:282] 0 containers: []
	W1212 20:07:40.051906   54219 logs.go:284] No container was found matching "coredns"
	I1212 20:07:40.051911   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:07:40.051971   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:07:40.078448   54219 cri.go:89] found id: ""
	I1212 20:07:40.078462   54219 logs.go:282] 0 containers: []
	W1212 20:07:40.078469   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:07:40.078473   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:07:40.078533   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:07:40.105530   54219 cri.go:89] found id: ""
	I1212 20:07:40.105555   54219 logs.go:282] 0 containers: []
	W1212 20:07:40.105562   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:07:40.105568   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:07:40.105632   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:07:40.134868   54219 cri.go:89] found id: ""
	I1212 20:07:40.134884   54219 logs.go:282] 0 containers: []
	W1212 20:07:40.134911   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:07:40.134917   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:07:40.134977   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:07:40.160769   54219 cri.go:89] found id: ""
	I1212 20:07:40.160782   54219 logs.go:282] 0 containers: []
	W1212 20:07:40.160789   54219 logs.go:284] No container was found matching "kindnet"
	I1212 20:07:40.160798   54219 logs.go:123] Gathering logs for container status ...
	I1212 20:07:40.160808   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:07:40.187973   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 20:07:40.187990   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:07:40.250924   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 20:07:40.250942   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:07:40.266149   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:07:40.266165   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:07:40.328697   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:07:40.319521   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:40.320506   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:40.322091   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:40.322633   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:40.324257   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:07:40.319521   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:40.320506   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:40.322091   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:40.322633   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:40.324257   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:07:40.328707   54219 logs.go:123] Gathering logs for containerd ...
	I1212 20:07:40.328717   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1212 20:07:40.395302   54219 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001147362s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 20:07:40.395340   54219 out.go:285] * 
	W1212 20:07:40.395406   54219 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001147362s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 20:07:40.395426   54219 out.go:285] * 
	W1212 20:07:40.397542   54219 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:07:40.403271   54219 out.go:203] 
	W1212 20:07:40.407023   54219 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001147362s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 20:07:40.407080   54219 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 20:07:40.407103   54219 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 20:07:40.410913   54219 out.go:203] 
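The start attempt above dies on a cgroup mismatch that the kubelet journal further down makes explicit: on this cgroup v1 host, kubelet v1.35.0-beta.0 refuses to start unless the KubeletConfiguration opts back into cgroup v1, exactly as the SystemVerification warning states. A minimal remediation sketch, assuming the kubelet config file kubeadm writes above (/var/lib/kubelet/config.yaml) is ours to edit; `failCgroupV1` is the YAML spelling of the 'FailCgroupV1' option named in the warning, and per that same warning kubeadm would additionally need the SystemVerification preflight check skipped:

    # Hypothetical sketch, not the test's method: opt the kubelet back into
    # cgroup v1 via the documented KubeletConfiguration field, then restart.
    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet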
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409212463Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409233845Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409270693Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409288186Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409297991Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409313604Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409322646Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409334633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409357730Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409389073Z" level=info msg="Connect containerd service"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409646705Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.410157440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.430913489Z" level=info msg="Start subscribing containerd event"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.431535088Z" level=info msg="Start recovering state"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.431784515Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.431871117Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469097271Z" level=info msg="Start event monitor"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469264239Z" level=info msg="Start cni network conf syncer for default"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469333685Z" level=info msg="Start streaming server"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469389199Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469443014Z" level=info msg="runtime interface starting up..."
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469502196Z" level=info msg="starting plugins..."
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469562690Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 19:55:30 functional-384006 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.471989321Z" level=info msg="containerd successfully booted in 0.083546s"
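The CNI load error near the top of this excerpt ("no network config found in /etc/cni/net.d") is expected on a node where kubeadm never finished bringing up the control plane; containerd retries once a config appears. A hedged spot check, reusing the path and the journalctl pattern already used in this report:

    # /etc/cni/net.d is expected to be empty here, matching the warning above.
    ls -la /etc/cni/net.d
    sudo journalctl -u containerd -n 400 | grep -i cni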
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:07:44.121863   21145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:44.122462   21145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:44.124104   21145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:44.124557   21145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:44.126056   21145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
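kubectl gives up here during API discovery, before any describe logic runs. The same endpoint can be probed directly; with no apiserver container present (see the empty container status table above), a refused connection is the expected result:

    # Hedged probe of the endpoint kubectl fails to reach; run inside the
    # node, e.g. via minikube ssh.
    curl -sk 'https://localhost:8441/api?timeout=32s'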
	
	
	==> dmesg <==
	[Dec12 19:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014827] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497798] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.037128] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.743560] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.524348] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 20:07:44 up 50 min,  0 user,  load average: 0.31, 0.22, 0.36
	Linux functional-384006 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:07:41 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:07:41 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 12 20:07:41 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:07:41 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:07:41 functional-384006 kubelet[21014]: E1212 20:07:41.753100   21014 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:07:41 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:07:41 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:07:42 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 12 20:07:42 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:07:42 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:07:42 functional-384006 kubelet[21019]: E1212 20:07:42.506080   21019 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:07:42 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:07:42 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:07:43 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 12 20:07:43 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:07:43 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:07:43 functional-384006 kubelet[21053]: E1212 20:07:43.236572   21053 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:07:43 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:07:43 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:07:43 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 12 20:07:43 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:07:43 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:07:44 functional-384006 kubelet[21117]: E1212 20:07:44.001442   21117 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:07:44 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:07:44 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-384006 -n functional-384006
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-384006 -n functional-384006: exit status 2 (366.847194ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-384006" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.12s)
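The "(may be ok)" note above reflects that minikube status reports stopped components through a non-zero exit code, so exit status 2 with "Stopped" output is a valid answer rather than a crash. A hedged variant printing the neighboring fields of the same status struct (field names as used by minikube's default status template):

    out/minikube-linux-arm64 status -p functional-384006 \
      --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'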

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-384006 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-384006 apply -f testdata/invalidsvc.yaml: exit status 1 (54.82828ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-384006 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.05s)
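The apply fails client-side: validation needs the OpenAPI schema, which cannot be downloaded from the stopped apiserver. The stderr names the escape hatch itself; note that skipping validation would only move the failure to the server round-trip, not fix it:

    # The knob named in the stderr above; still fails while the apiserver is down.
    kubectl --context functional-384006 apply -f testdata/invalidsvc.yaml --validate=false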

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.74s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-384006 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-384006 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-384006 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-384006 --alsologtostderr -v=1] stderr:
I1212 20:10:13.872832   71705 out.go:360] Setting OutFile to fd 1 ...
I1212 20:10:13.873010   71705 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:10:13.873031   71705 out.go:374] Setting ErrFile to fd 2...
I1212 20:10:13.873039   71705 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:10:13.873328   71705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
I1212 20:10:13.873598   71705 mustload.go:66] Loading cluster: functional-384006
I1212 20:10:13.874029   71705 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1212 20:10:13.874492   71705 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
I1212 20:10:13.892302   71705 host.go:66] Checking if "functional-384006" exists ...
I1212 20:10:13.892668   71705 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1212 20:10:13.957711   71705 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:10:13.940502211 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1212 20:10:13.957822   71705 api_server.go:166] Checking apiserver status ...
I1212 20:10:13.957888   71705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 20:10:13.957929   71705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
I1212 20:10:13.975751   71705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
W1212 20:10:14.086009   71705 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1212 20:10:14.089233   71705 out.go:179] * The control-plane node functional-384006 apiserver is not running: (state=Stopped)
I1212 20:10:14.092222   71705 out.go:179]   To start a cluster, run: "minikube start -p functional-384006"
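The dashboard command aborts on the same liveness probe minikube logs at api_server.go:166 above; the probe is just a pgrep inside the node, reproducible by hand (a non-zero exit is what maps to state=Stopped here):

    out/minikube-linux-arm64 -p functional-384006 ssh -- \
      sudo pgrep -xnf 'kube-apiserver.*minikube.*'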
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-384006
helpers_test.go:244: (dbg) docker inspect functional-384006:

-- stdout --
	[
	    {
	        "Id": "b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17",
	        "Created": "2025-12-12T19:40:49.413785329Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43086,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T19:40:49.485581335Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/hosts",
	        "LogPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17-json.log",
	        "Name": "/functional-384006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-384006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-384006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17",
	                "LowerDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296-init/diff:/var/lib/docker/overlay2/e045d4bf347c64f3cbf42a97f0cb5729ed5699bda73ca5751717f555f7c01df1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/merged",
	                "UpperDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/diff",
	                "WorkDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-384006",
	                "Source": "/var/lib/docker/volumes/functional-384006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-384006",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-384006",
	                "name.minikube.sigs.k8s.io": "functional-384006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "36cb954f7d4f6bf90d415ba6b309740af43913afba20f6d7d93ec3c7d90d4de5",
	            "SandboxKey": "/var/run/docker/netns/36cb954f7d4f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-384006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:63:42:b7:50:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef3790c143c0333ab10341d6a40177cef53914dddf926d048a811221f7b4d25e",
	                    "EndpointID": "d9f77e46696253f9c3ce8a0a36703d7a03738ae348c39276dbe99fc3079fb5ee",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-384006",
	                        "b1a98cbc4698"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
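
Note: in the HostConfig above every PortBindings entry asks for an ephemeral host port (empty "HostPort"); the resolved mappings only show up under NetworkSettings.Ports. A quick way to read a single resolved mapping from the host (a diagnostic sketch, not part of the test run; assumes the docker CLI is on PATH):

	docker port functional-384006 8441/tcp
	# or via an inspect template:
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-384006
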
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-384006 -n functional-384006
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-384006 -n functional-384006: exit status 2 (322.138799ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-384006 service hello-node --url                                                                                                          │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ ssh       │ functional-384006 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ mount     │ -p functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1899271428/001:/mount-9p --alsologtostderr -v=1              │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ ssh       │ functional-384006 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh       │ functional-384006 ssh -- ls -la /mount-9p                                                                                                           │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh       │ functional-384006 ssh cat /mount-9p/test-1765570203865063779                                                                                        │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh       │ functional-384006 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ ssh       │ functional-384006 ssh sudo umount -f /mount-9p                                                                                                      │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ mount     │ -p functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo34640341/001:/mount-9p --alsologtostderr -v=1 --port 46464   │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ ssh       │ functional-384006 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ ssh       │ functional-384006 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh       │ functional-384006 ssh -- ls -la /mount-9p                                                                                                           │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh       │ functional-384006 ssh sudo umount -f /mount-9p                                                                                                      │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ mount     │ -p functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4202071176/001:/mount1 --alsologtostderr -v=1                │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ ssh       │ functional-384006 ssh findmnt -T /mount1                                                                                                            │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ mount     │ -p functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4202071176/001:/mount2 --alsologtostderr -v=1                │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ mount     │ -p functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4202071176/001:/mount3 --alsologtostderr -v=1                │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ ssh       │ functional-384006 ssh findmnt -T /mount1                                                                                                            │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh       │ functional-384006 ssh findmnt -T /mount2                                                                                                            │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh       │ functional-384006 ssh findmnt -T /mount3                                                                                                            │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ mount     │ -p functional-384006 --kill=true                                                                                                                    │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ start     │ -p functional-384006 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ start     │ -p functional-384006 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ start     │ -p functional-384006 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-384006 --alsologtostderr -v=1                                                                                      │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:10:13
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:10:13.630130   71633 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:10:13.630368   71633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:10:13.630398   71633 out.go:374] Setting ErrFile to fd 2...
	I1212 20:10:13.630418   71633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:10:13.630729   71633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 20:10:13.631190   71633 out.go:368] Setting JSON to false
	I1212 20:10:13.632211   71633 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3163,"bootTime":1765567051,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 20:10:13.632321   71633 start.go:143] virtualization:  
	I1212 20:10:13.635605   71633 out.go:179] * [functional-384006] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:10:13.639456   71633 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:10:13.639530   71633 notify.go:221] Checking for updates...
	I1212 20:10:13.645472   71633 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:10:13.648307   71633 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 20:10:13.651207   71633 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	I1212 20:10:13.654091   71633 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:10:13.657072   71633 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:10:13.660514   71633 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 20:10:13.661147   71633 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:10:13.686958   71633 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:10:13.687088   71633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:10:13.748518   71633 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:10:13.739495162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:10:13.748622   71633 docker.go:319] overlay module found
	I1212 20:10:13.753675   71633 out.go:179] * Using the docker driver based on existing profile
	I1212 20:10:13.756658   71633 start.go:309] selected driver: docker
	I1212 20:10:13.756696   71633 start.go:927] validating driver "docker" against &{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:10:13.756792   71633 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:10:13.756901   71633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:10:13.815565   71633 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:10:13.805182249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:10:13.816056   71633 cni.go:84] Creating CNI manager for ""
	I1212 20:10:13.816121   71633 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 20:10:13.816169   71633 start.go:353] cluster config:
	{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:10:13.819358   71633 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409212463Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409233845Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409270693Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409288186Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409297991Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409313604Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409322646Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409334633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409357730Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409389073Z" level=info msg="Connect containerd service"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409646705Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.410157440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.430913489Z" level=info msg="Start subscribing containerd event"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.431535088Z" level=info msg="Start recovering state"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.431784515Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.431871117Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469097271Z" level=info msg="Start event monitor"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469264239Z" level=info msg="Start cni network conf syncer for default"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469333685Z" level=info msg="Start streaming server"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469389199Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469443014Z" level=info msg="runtime interface starting up..."
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469502196Z" level=info msg="starting plugins..."
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469562690Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 19:55:30 functional-384006 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.471989321Z" level=info msg="containerd successfully booted in 0.083546s"
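
Note: the containerd error above ("no network config found in /etc/cni/net.d") is normal for a node whose CNI has not been laid down yet; with the docker driver and containerd runtime minikube deploys kindnet (see the cni.go line in the Last Start log), and kindnet only writes that config once the kubelet is up. Whether a config ever landed can be checked directly (a diagnostic sketch, assuming the profile container is still running):

	out/minikube-linux-arm64 -p functional-384006 ssh -- ls -la /etc/cni/net.d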
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:15.146970   23390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:15.147855   23390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:15.149540   23390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:15.149861   23390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:15.151416   23390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
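
Note: these connection refusals are consistent with the kubelet crash loop shown below: kube-apiserver runs as a static pod managed by the kubelet, so while the kubelet is down nothing ever binds port 8441. Container state inside the node can be inspected directly with crictl (a diagnostic sketch, assuming SSH to the node still works):

	out/minikube-linux-arm64 -p functional-384006 ssh -- sudo crictl ps -a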
	
	
	==> dmesg <==
	[Dec12 19:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014827] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497798] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.037128] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.743560] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.524348] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 20:10:15 up 52 min,  0 user,  load average: 0.54, 0.37, 0.40
	Linux functional-384006 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:10:11 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:10:12 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 523.
	Dec 12 20:10:12 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:10:12 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:10:12 functional-384006 kubelet[23249]: E1212 20:10:12.513583   23249 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:10:12 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:10:12 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:10:13 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 524.
	Dec 12 20:10:13 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:10:13 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:10:13 functional-384006 kubelet[23270]: E1212 20:10:13.251289   23270 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:10:13 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:10:13 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:10:13 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 525.
	Dec 12 20:10:13 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:10:13 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:10:14 functional-384006 kubelet[23276]: E1212 20:10:14.015565   23276 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:10:14 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:10:14 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:10:14 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 526.
	Dec 12 20:10:14 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:10:14 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:10:14 functional-384006 kubelet[23305]: E1212 20:10:14.761812   23305 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:10:14 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:10:14 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
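
Note: restart counters 523-526 within a few seconds mean the kubelet has been flapping for the entire run, and the validation error ("kubelet is configured to not run on a host using cgroup v1") matches this host: the Ubuntu 20.04 / 5.15 AWS kernel boots with the legacy cgroup v1 hierarchy by default. The cgroup mode is easy to confirm from a shell on the host or node (a diagnostic sketch):

	stat -fc %T /sys/fs/cgroup   # "cgroup2fs" means cgroup v2, "tmpfs" means legacy v1

On systemd hosts, adding systemd.unified_cgroup_hierarchy=1 to the kernel command line and rebooting switches to the unified (v2) hierarchy.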
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-384006 -n functional-384006
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-384006 -n functional-384006: exit status 2 (328.521902ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-384006" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.74s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-384006 status: exit status 2 (324.719188ms)

-- stdout --
	functional-384006
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-384006 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-384006 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (308.820409ms)

-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-384006 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-384006 status -o json: exit status 2 (321.404186ms)

-- stdout --
	{"Name":"functional-384006","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-384006 status -o json" : exit status 2
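
Note: the three status probes above disagree ("kubelet: Stopped" in the plain and custom formats, "Kubelet":"Running" in the JSON output moments later), which fits a kubelet that systemd restarts roughly once a second: each probe samples a different instant of the crash loop. For scripting against this output the JSON form is the easiest to pick apart (a sketch, assuming jq is installed):

	out/minikube-linux-arm64 -p functional-384006 status -o json | jq -r '.Kubelet, .APIServer'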
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-384006
helpers_test.go:244: (dbg) docker inspect functional-384006:

-- stdout --
	[
	    {
	        "Id": "b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17",
	        "Created": "2025-12-12T19:40:49.413785329Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43086,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T19:40:49.485581335Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/hosts",
	        "LogPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17-json.log",
	        "Name": "/functional-384006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-384006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-384006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17",
	                "LowerDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296-init/diff:/var/lib/docker/overlay2/e045d4bf347c64f3cbf42a97f0cb5729ed5699bda73ca5751717f555f7c01df1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/merged",
	                "UpperDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/diff",
	                "WorkDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-384006",
	                "Source": "/var/lib/docker/volumes/functional-384006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-384006",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-384006",
	                "name.minikube.sigs.k8s.io": "functional-384006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "36cb954f7d4f6bf90d415ba6b309740af43913afba20f6d7d93ec3c7d90d4de5",
	            "SandboxKey": "/var/run/docker/netns/36cb954f7d4f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-384006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:63:42:b7:50:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef3790c143c0333ab10341d6a40177cef53914dddf926d048a811221f7b4d25e",
	                    "EndpointID": "d9f77e46696253f9c3ce8a0a36703d7a03738ae348c39276dbe99fc3079fb5ee",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-384006",
	                        "b1a98cbc4698"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-384006 -n functional-384006
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-384006 -n functional-384006: exit status 2 (301.903193ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service │ functional-384006 service list                                                                                                                    │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ service │ functional-384006 service list -o json                                                                                                            │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ service │ functional-384006 service --namespace=default --https --url hello-node                                                                            │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ service │ functional-384006 service hello-node --url --format={{.IP}}                                                                                       │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ service │ functional-384006 service hello-node --url                                                                                                        │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ ssh     │ functional-384006 ssh findmnt -T /mount-9p | grep 9p                                                                                              │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ mount   │ -p functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1899271428/001:/mount-9p --alsologtostderr -v=1            │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ ssh     │ functional-384006 ssh findmnt -T /mount-9p | grep 9p                                                                                              │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh     │ functional-384006 ssh -- ls -la /mount-9p                                                                                                         │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh     │ functional-384006 ssh cat /mount-9p/test-1765570203865063779                                                                                      │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh     │ functional-384006 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                  │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ ssh     │ functional-384006 ssh sudo umount -f /mount-9p                                                                                                    │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ mount   │ -p functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo34640341/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ ssh     │ functional-384006 ssh findmnt -T /mount-9p | grep 9p                                                                                              │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ ssh     │ functional-384006 ssh findmnt -T /mount-9p | grep 9p                                                                                              │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh     │ functional-384006 ssh -- ls -la /mount-9p                                                                                                         │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh     │ functional-384006 ssh sudo umount -f /mount-9p                                                                                                    │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ mount   │ -p functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4202071176/001:/mount1 --alsologtostderr -v=1              │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ ssh     │ functional-384006 ssh findmnt -T /mount1                                                                                                          │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ mount   │ -p functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4202071176/001:/mount2 --alsologtostderr -v=1              │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ mount   │ -p functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4202071176/001:/mount3 --alsologtostderr -v=1              │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ ssh     │ functional-384006 ssh findmnt -T /mount1                                                                                                          │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh     │ functional-384006 ssh findmnt -T /mount2                                                                                                          │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh     │ functional-384006 ssh findmnt -T /mount3                                                                                                          │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ mount   │ -p functional-384006 --kill=true                                                                                                                  │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
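	The mount rows above make up minikube's 9p round-trip: export a host directory into the node, verify it from inside over SSH, then kill the mount process. Condensed to the underlying commands (a sketch; /tmp/src stands in for the per-test temp directories logged above, all flags as logged):
		# export a host directory into the node over 9p (stays in the foreground)
		minikube mount -p functional-384006 /tmp/src:/mount-9p --alsologtostderr -v=1 &
		# from inside the node, confirm the target is backed by a 9p filesystem
		minikube -p functional-384006 ssh "findmnt -T /mount-9p | grep 9p"
		# tear down any mount processes still running for the profile
		minikube mount -p functional-384006 --kill=true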
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:55:27
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:55:27.852724   54219 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:55:27.853298   54219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:55:27.853302   54219 out.go:374] Setting ErrFile to fd 2...
	I1212 19:55:27.853307   54219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:55:27.853572   54219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 19:55:27.853965   54219 out.go:368] Setting JSON to false
	I1212 19:55:27.854729   54219 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2277,"bootTime":1765567051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 19:55:27.854784   54219 start.go:143] virtualization:  
	I1212 19:55:27.858422   54219 out.go:179] * [functional-384006] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 19:55:27.861585   54219 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:55:27.861670   54219 notify.go:221] Checking for updates...
	I1212 19:55:27.868224   54219 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:55:27.871239   54219 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:55:27.874218   54219 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	I1212 19:55:27.877241   54219 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 19:55:27.880290   54219 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:55:27.883683   54219 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 19:55:27.883824   54219 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:55:27.904994   54219 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 19:55:27.905107   54219 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:55:27.972320   54219 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 19:55:27.96314904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 19:55:27.972416   54219 docker.go:319] overlay module found
	I1212 19:55:27.975641   54219 out.go:179] * Using the docker driver based on existing profile
	I1212 19:55:27.978549   54219 start.go:309] selected driver: docker
	I1212 19:55:27.978557   54219 start.go:927] validating driver "docker" against &{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:55:27.978631   54219 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:55:27.978726   54219 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:55:28.035973   54219 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 19:55:28.026224666 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 19:55:28.036393   54219 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 19:55:28.036415   54219 cni.go:84] Creating CNI manager for ""
	I1212 19:55:28.036463   54219 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 19:55:28.036537   54219 start.go:353] cluster config:
	{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:55:28.039865   54219 out.go:179] * Starting "functional-384006" primary control-plane node in "functional-384006" cluster
	I1212 19:55:28.042798   54219 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 19:55:28.046082   54219 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 19:55:28.048968   54219 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 19:55:28.049006   54219 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 19:55:28.049015   54219 cache.go:65] Caching tarball of preloaded images
	I1212 19:55:28.049057   54219 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 19:55:28.049116   54219 preload.go:238] Found /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 19:55:28.049125   54219 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 19:55:28.049240   54219 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/config.json ...
	I1212 19:55:28.070140   54219 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 19:55:28.070152   54219 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 19:55:28.070172   54219 cache.go:243] Successfully downloaded all kic artifacts
	I1212 19:55:28.070201   54219 start.go:360] acquireMachinesLock for functional-384006: {Name:mk3334c8fedf7efc32fb4628474f2cba3c1d9181 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:55:28.070267   54219 start.go:364] duration metric: took 47.145µs to acquireMachinesLock for "functional-384006"
	I1212 19:55:28.070285   54219 start.go:96] Skipping create...Using existing machine configuration
	I1212 19:55:28.070289   54219 fix.go:54] fixHost starting: 
	I1212 19:55:28.070558   54219 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
	I1212 19:55:28.087483   54219 fix.go:112] recreateIfNeeded on functional-384006: state=Running err=<nil>
	W1212 19:55:28.087503   54219 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 19:55:28.090814   54219 out.go:252] * Updating the running docker "functional-384006" container ...
	I1212 19:55:28.090839   54219 machine.go:94] provisionDockerMachine start ...
	I1212 19:55:28.090929   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:28.108521   54219 main.go:143] libmachine: Using SSH client type: native
	I1212 19:55:28.108845   54219 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:55:28.108851   54219 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 19:55:28.259057   54219 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-384006
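	The inspect template above is how the docker driver locates its SSH endpoint: the node container publishes 22/tcp on an ephemeral host port (32788 in this run) and libmachine dials it on 127.0.0.1 with the machine key. A by-hand equivalent, using the user and key path logged by sshutil below (a sketch, not minikube's own code path):
		# resolve the host port mapped to the node container's 22/tcp
		port=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-384006)
		# dial it the way libmachine does: user "docker", the machine's id_rsa
		ssh -i /home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa \
		    -p "$port" docker@127.0.0.1 hostname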
	
	I1212 19:55:28.259071   54219 ubuntu.go:182] provisioning hostname "functional-384006"
	I1212 19:55:28.259129   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:28.275402   54219 main.go:143] libmachine: Using SSH client type: native
	I1212 19:55:28.275704   54219 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:55:28.275713   54219 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-384006 && echo "functional-384006" | sudo tee /etc/hostname
	I1212 19:55:28.436755   54219 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-384006
	
	I1212 19:55:28.436820   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:28.461420   54219 main.go:143] libmachine: Using SSH client type: native
	I1212 19:55:28.461717   54219 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:55:28.461739   54219 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-384006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-384006/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-384006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 19:55:28.612044   54219 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 19:55:28.612060   54219 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-2315/.minikube}
	I1212 19:55:28.612075   54219 ubuntu.go:190] setting up certificates
	I1212 19:55:28.612092   54219 provision.go:84] configureAuth start
	I1212 19:55:28.612163   54219 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384006
	I1212 19:55:28.632765   54219 provision.go:143] copyHostCerts
	I1212 19:55:28.632832   54219 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem, removing ...
	I1212 19:55:28.632839   54219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem
	I1212 19:55:28.632906   54219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem (1123 bytes)
	I1212 19:55:28.633087   54219 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem, removing ...
	I1212 19:55:28.633091   54219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem
	I1212 19:55:28.633116   54219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem (1679 bytes)
	I1212 19:55:28.633174   54219 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem, removing ...
	I1212 19:55:28.633178   54219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem
	I1212 19:55:28.633202   54219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem (1078 bytes)
	I1212 19:55:28.633253   54219 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem org=jenkins.functional-384006 san=[127.0.0.1 192.168.49.2 functional-384006 localhost minikube]
	I1212 19:55:28.793482   54219 provision.go:177] copyRemoteCerts
	I1212 19:55:28.793529   54219 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 19:55:28.793567   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:28.810312   54219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:55:28.915572   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 19:55:28.933605   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 19:55:28.951138   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 19:55:28.968522   54219 provision.go:87] duration metric: took 356.418282ms to configureAuth
	I1212 19:55:28.968541   54219 ubuntu.go:206] setting minikube options for container-runtime
	I1212 19:55:28.968740   54219 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 19:55:28.968745   54219 machine.go:97] duration metric: took 877.902402ms to provisionDockerMachine
	I1212 19:55:28.968752   54219 start.go:293] postStartSetup for "functional-384006" (driver="docker")
	I1212 19:55:28.968762   54219 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 19:55:28.968808   54219 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 19:55:28.968851   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:28.987014   54219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:55:29.092173   54219 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 19:55:29.095606   54219 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 19:55:29.095622   54219 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 19:55:29.095634   54219 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-2315/.minikube/addons for local assets ...
	I1212 19:55:29.095686   54219 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-2315/.minikube/files for local assets ...
	I1212 19:55:29.095770   54219 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1212 19:55:29.095858   54219 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> hosts in /etc/test/nested/copy/4120
	I1212 19:55:29.095909   54219 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4120
	I1212 19:55:29.103304   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1212 19:55:29.119777   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts --> /etc/test/nested/copy/4120/hosts (40 bytes)
	I1212 19:55:29.137094   54219 start.go:296] duration metric: took 168.327905ms for postStartSetup
	I1212 19:55:29.137179   54219 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 19:55:29.137221   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:29.155438   54219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:55:29.256753   54219 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 19:55:29.261489   54219 fix.go:56] duration metric: took 1.191194255s for fixHost
	I1212 19:55:29.261504   54219 start.go:83] releasing machines lock for "functional-384006", held for 1.19123098s
	I1212 19:55:29.261570   54219 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384006
	I1212 19:55:29.278501   54219 ssh_runner.go:195] Run: cat /version.json
	I1212 19:55:29.278542   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:29.278786   54219 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 19:55:29.278838   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:29.300866   54219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:55:29.303322   54219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:55:29.403647   54219 ssh_runner.go:195] Run: systemctl --version
	I1212 19:55:29.503423   54219 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 19:55:29.507672   54219 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 19:55:29.507733   54219 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 19:55:29.515681   54219 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 19:55:29.515695   54219 start.go:496] detecting cgroup driver to use...
	I1212 19:55:29.515726   54219 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 19:55:29.515780   54219 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 19:55:29.531132   54219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 19:55:29.543869   54219 docker.go:218] disabling cri-docker service (if available) ...
	I1212 19:55:29.543922   54219 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 19:55:29.559268   54219 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 19:55:29.572058   54219 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 19:55:29.685297   54219 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 19:55:29.805225   54219 docker.go:234] disabling docker service ...
	I1212 19:55:29.805279   54219 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 19:55:29.822098   54219 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 19:55:29.834865   54219 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 19:55:29.949324   54219 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 19:55:30.087483   54219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 19:55:30.100955   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 19:55:30.116237   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 19:55:30.126127   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 19:55:30.136085   54219 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 19:55:30.136147   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 19:55:30.145914   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:55:30.154991   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 19:55:30.163972   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:55:30.172470   54219 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 19:55:30.180930   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 19:55:30.190361   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 19:55:30.199337   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 19:55:30.208975   54219 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 19:55:30.216623   54219 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 19:55:30.223993   54219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:55:30.330122   54219 ssh_runner.go:195] Run: sudo systemctl restart containerd
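	The sed series above edits the existing /etc/containerd/config.toml in place rather than templating a fresh file, then makes the result live with the daemon-reload/restart pair. A spot-check of the values the substitutions converge on (expected lines inferred from the logged sed expressions, not dumped from the node):
		# verify the settings the sed series above should have produced
		sudo grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
		#   sandbox_image = "registry.k8s.io/pause:3.10.1"
		#   restrict_oom_score_adj = false
		#   SystemdCgroup = false        # containerd matches the host's cgroupfs driver
		#   conf_dir = "/etc/cni/net.d"
		#   enable_unprivileged_ports = true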
	I1212 19:55:30.473295   54219 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 19:55:30.473369   54219 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 19:55:30.477639   54219 start.go:564] Will wait 60s for crictl version
	I1212 19:55:30.477693   54219 ssh_runner.go:195] Run: which crictl
	I1212 19:55:30.481548   54219 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 19:55:30.504633   54219 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 19:55:30.504687   54219 ssh_runner.go:195] Run: containerd --version
	I1212 19:55:30.523789   54219 ssh_runner.go:195] Run: containerd --version
	I1212 19:55:30.548955   54219 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 19:55:30.551786   54219 cli_runner.go:164] Run: docker network inspect functional-384006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 19:55:30.567944   54219 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 19:55:30.574767   54219 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1212 19:55:30.577669   54219 kubeadm.go:884] updating cluster {Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 19:55:30.577791   54219 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 19:55:30.577868   54219 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 19:55:30.602150   54219 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 19:55:30.602162   54219 containerd.go:534] Images already preloaded, skipping extraction
	I1212 19:55:30.602217   54219 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 19:55:30.625907   54219 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 19:55:30.625919   54219 cache_images.go:86] Images are preloaded, skipping loading
	I1212 19:55:30.625925   54219 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1212 19:55:30.626026   54219 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-384006 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
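	The unit fragment above is written as a systemd drop-in (scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below); the empty ExecStart= line first clears the base unit's command so the drop-in's own ExecStart can replace it. To inspect the merged unit on the node:
		# print the kubelet unit together with minikube's 10-kubeadm.conf drop-in
		minikube -p functional-384006 ssh "systemctl cat kubelet"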
	I1212 19:55:30.626113   54219 ssh_runner.go:195] Run: sudo crictl info
	I1212 19:55:30.649188   54219 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1212 19:55:30.649208   54219 cni.go:84] Creating CNI manager for ""
	I1212 19:55:30.649216   54219 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 19:55:30.649224   54219 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 19:55:30.649244   54219 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-384006 NodeName:functional-384006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 19:55:30.649349   54219 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-384006"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
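	The rendered config above is staged as /var/tmp/minikube/kubeadm.yaml.new (scp'd a few lines below) and compared against the active copy; a non-empty diff is what later triggers the restart-and-reconfigure path. The drift check itself is a single command, reproducible by hand on the node:
		# non-empty output here is what kubeadm.go reports as config drift
		sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new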
	
	I1212 19:55:30.649412   54219 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 19:55:30.656757   54219 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 19:55:30.656810   54219 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 19:55:30.663814   54219 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 19:55:30.675878   54219 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 19:55:30.688262   54219 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1212 19:55:30.703971   54219 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 19:55:30.708408   54219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:55:30.839166   54219 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:55:31.445221   54219 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006 for IP: 192.168.49.2
	I1212 19:55:31.445232   54219 certs.go:195] generating shared ca certs ...
	I1212 19:55:31.445248   54219 certs.go:227] acquiring lock for ca certs: {Name:mk39256c1929fe0803d745b94bd58afc348a7e3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:55:31.445419   54219 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key
	I1212 19:55:31.445478   54219 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key
	I1212 19:55:31.445485   54219 certs.go:257] generating profile certs ...
	I1212 19:55:31.445581   54219 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.key
	I1212 19:55:31.445645   54219 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key.6e756d1b
	I1212 19:55:31.445694   54219 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key
	I1212 19:55:31.445823   54219 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem (1338 bytes)
	W1212 19:55:31.445865   54219 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1212 19:55:31.445873   54219 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 19:55:31.445899   54219 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem (1078 bytes)
	I1212 19:55:31.445931   54219 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem (1123 bytes)
	I1212 19:55:31.445954   54219 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem (1679 bytes)
	I1212 19:55:31.446005   54219 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1212 19:55:31.446654   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 19:55:31.468075   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 19:55:31.484808   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 19:55:31.501104   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 19:55:31.519018   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 19:55:31.536328   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 19:55:31.553581   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 19:55:31.570191   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 19:55:31.586954   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1212 19:55:31.603358   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 19:55:31.620509   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1212 19:55:31.637987   54219 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 19:55:31.650484   54219 ssh_runner.go:195] Run: openssl version
	I1212 19:55:31.656450   54219 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1212 19:55:31.663636   54219 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1212 19:55:31.671141   54219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1212 19:55:31.674842   54219 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:40 /usr/share/ca-certificates/4120.pem
	I1212 19:55:31.674900   54219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1212 19:55:31.715596   54219 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 19:55:31.723059   54219 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1212 19:55:31.730233   54219 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1212 19:55:31.737626   54219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1212 19:55:31.741161   54219 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:40 /usr/share/ca-certificates/41202.pem
	I1212 19:55:31.741213   54219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1212 19:55:31.783908   54219 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 19:55:31.791542   54219 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:55:31.799333   54219 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 19:55:31.806999   54219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:55:31.810570   54219 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:30 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:55:31.810630   54219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:55:31.851440   54219 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
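	Each hash/test pair above follows the OpenSSL trust-store convention: certificates in /etc/ssl/certs are looked up via subject-hash symlinks named <hash>.0 (b5213941.0 here for minikubeCA.pem). A condensed sketch of ensuring such a link exists, with the same paths as the log:
		# subject hash OpenSSL uses to find the CA in /etc/ssl/certs
		h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		# create the <hash>.0 symlink if the test -L probe above had failed
		sudo test -L "/etc/ssl/certs/${h}.0" \
		  || sudo ln -s /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"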
	I1212 19:55:31.858926   54219 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:55:31.862520   54219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 19:55:31.903666   54219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 19:55:31.944997   54219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 19:55:31.985858   54219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 19:55:32.026779   54219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 19:55:32.067925   54219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 19:55:32.110481   54219 kubeadm.go:401] StartCluster: {Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:55:32.110555   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 19:55:32.110624   54219 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:55:32.136703   54219 cri.go:89] found id: ""
	I1212 19:55:32.136771   54219 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 19:55:32.144223   54219 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 19:55:32.144262   54219 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 19:55:32.144312   54219 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 19:55:32.151339   54219 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:55:32.151833   54219 kubeconfig.go:125] found "functional-384006" server: "https://192.168.49.2:8441"
	I1212 19:55:32.153024   54219 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 19:55:32.160890   54219 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-12 19:40:57.602349197 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-12 19:55:30.697011388 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1212 19:55:32.160901   54219 kubeadm.go:1161] stopping kube-system containers ...
	I1212 19:55:32.160919   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1212 19:55:32.160971   54219 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:55:32.185826   54219 cri.go:89] found id: ""
	I1212 19:55:32.185884   54219 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 19:55:32.204086   54219 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 19:55:32.212130   54219 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 12 19:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 12 19:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 12 19:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 12 19:45 /etc/kubernetes/scheduler.conf
	
	I1212 19:55:32.212191   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 19:55:32.219934   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 19:55:32.227897   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:55:32.227949   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 19:55:32.235243   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 19:55:32.242858   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:55:32.242920   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 19:55:32.250701   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 19:55:32.258298   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:55:32.258372   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
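The grep/rm pairs above implement a simple cleanup: any component kubeconfig that no longer mentions the expected control-plane endpoint is deleted so kubeadm can regenerate it in the next phase. A sketch of the same loop, assuming direct local file access instead of sudo over SSH:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs removes any component kubeconfig that does not
// reference the expected control-plane endpoint, mirroring the
// grep-then-rm sequence in the log above.
func cleanStaleKubeconfigs(endpoint string, paths []string) error {
	for _, p := range paths {
		b, err := os.ReadFile(p)
		if err != nil {
			continue // missing file: nothing to clean
		}
		if !strings.Contains(string(b), endpoint) {
			fmt.Printf("%q not found in %s - removing\n", endpoint, p)
			if err := os.Remove(p); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	_ = cleanStaleKubeconfigs("https://control-plane.minikube.internal:8441",
		[]string{
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
}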
	I1212 19:55:32.265710   54219 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 19:55:32.273454   54219 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 19:55:32.324121   54219 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 19:55:33.892385   54219 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.568235814s)
	I1212 19:55:33.892459   54219 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 19:55:34.100445   54219 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 19:55:34.171354   54219 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
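Rather than a full `kubeadm init`, the restart path above regenerates the control plane piecewise with `kubeadm init phase` subcommands in a fixed order: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of that sequence, assuming kubeadm is on PATH (minikube actually prepends /var/lib/minikube/binaries/<version> and wraps each call in sudo over SSH):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The phases in the order the log runs them; each one reads the same
	// regenerated kubeadm.yaml.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			fmt.Printf("kubeadm %v failed: %v\n%s", p, err, out)
			return // stop at the first failed phase
		}
	}
}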
	I1212 19:55:34.217083   54219 api_server.go:52] waiting for apiserver process to appear ...
	I1212 19:55:34.217158   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:34.717278   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:35.217351   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:35.717787   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:36.217788   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:36.717351   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:37.218074   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:37.717373   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:38.218212   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:38.717990   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:39.217746   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:39.717717   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:40.217500   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:40.718081   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:41.217959   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:41.717497   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:42.218218   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:42.717340   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:43.217997   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:43.717351   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:44.217978   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:44.717885   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:45.217387   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:45.718121   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:46.217288   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:46.718053   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:47.217318   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:47.717728   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:48.218067   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:48.717326   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:49.217512   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:49.717353   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:50.217741   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:50.717983   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:51.217333   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:51.717999   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:52.217773   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:52.717402   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:53.217334   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:53.717268   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:54.218070   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:54.717712   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:55.217290   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:55.718107   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:56.217424   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:56.717836   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:57.217448   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:57.718053   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:58.217955   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:58.717942   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:59.218252   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:59.717973   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:00.218214   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:00.718129   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:01.217818   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:01.717354   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:02.218222   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:02.717312   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:03.217601   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:03.717316   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:04.217287   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:04.718088   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:05.217741   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:05.717294   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:06.218217   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:06.717867   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:07.217283   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:07.717349   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:08.217366   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:08.717546   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:09.218108   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:09.717381   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:10.217293   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:10.717333   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:11.217921   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:11.717764   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:12.217784   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:12.718179   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:13.218229   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:13.717368   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:14.217920   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:14.717247   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:15.218046   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:15.717383   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:16.218006   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:16.718040   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:17.217291   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:17.717910   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:18.218203   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:18.717788   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:19.217278   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:19.718149   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:20.217534   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:20.717322   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:21.218045   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:21.717355   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:22.218081   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:22.717268   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:23.218208   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:23.717289   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:24.217232   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:24.717930   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:25.218161   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:25.718192   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:26.217327   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:26.717452   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:27.218230   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:27.717354   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:28.217306   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:28.717853   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:29.218101   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:29.717649   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:30.218027   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:30.718035   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:31.217283   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:31.717340   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:32.218050   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:32.717819   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:33.217245   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:33.717370   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
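The run of pgrep lines above is a half-second poll: minikube keeps asking for a kube-apiserver process until one appears or a deadline passes, then falls back to collecting logs. A sketch of that loop under the assumption of a plain local shell (the real calls go through ssh_runner):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep for a kube-apiserver process every
// 500ms until it appears or the timeout elapses, matching the half-second
// cadence of the pgrep lines above.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process never appeared within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(1 * time.Minute); err != nil {
		fmt.Println(err) // in the test above, this path leads to log gathering
	}
}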
	I1212 19:56:34.217941   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:34.218012   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:34.255372   54219 cri.go:89] found id: ""
	I1212 19:56:34.255386   54219 logs.go:282] 0 containers: []
	W1212 19:56:34.255399   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:34.255404   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:34.255464   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:34.281284   54219 cri.go:89] found id: ""
	I1212 19:56:34.281297   54219 logs.go:282] 0 containers: []
	W1212 19:56:34.281303   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:34.281308   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:34.281363   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:34.304259   54219 cri.go:89] found id: ""
	I1212 19:56:34.304273   54219 logs.go:282] 0 containers: []
	W1212 19:56:34.304279   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:34.304284   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:34.304338   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:34.327600   54219 cri.go:89] found id: ""
	I1212 19:56:34.327613   54219 logs.go:282] 0 containers: []
	W1212 19:56:34.327620   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:34.327625   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:34.327678   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:34.352303   54219 cri.go:89] found id: ""
	I1212 19:56:34.352317   54219 logs.go:282] 0 containers: []
	W1212 19:56:34.352323   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:34.352328   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:34.352385   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:34.375938   54219 cri.go:89] found id: ""
	I1212 19:56:34.375951   54219 logs.go:282] 0 containers: []
	W1212 19:56:34.375958   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:34.375963   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:34.376019   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:34.399635   54219 cri.go:89] found id: ""
	I1212 19:56:34.399648   54219 logs.go:282] 0 containers: []
	W1212 19:56:34.399655   54219 logs.go:284] No container was found matching "kindnet"
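Each `No container was found matching "..."` warning above comes from the same probe: `crictl ps -a --quiet --name=<component>` prints one container ID per line for containers in any state, and an empty result means the component never came up. A local sketch of that per-component query (the real call is prefixed with sudo and run over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs asks crictl for every container (any state) whose name
// matches a control-plane component. An empty slice is what produces the
// warnings in the log above.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one container ID per line
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := listContainerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		fmt.Println(c, ids)
	}
}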
	I1212 19:56:34.399663   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:34.399675   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:34.457482   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:34.457501   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:34.467864   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:34.467879   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:34.532394   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:34.523991   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:34.524531   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:34.526241   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:34.526742   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:34.528425   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:56:34.523991   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:34.524531   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:34.526241   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:34.526742   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:34.528425   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:56:34.532405   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:34.532415   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:34.595426   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:34.595444   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
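With no component containers to inspect, each cycle above falls back to host-level sources: the kubelet and containerd journals, recent kernel messages, `kubectl describe nodes` (which fails here with connection refused because nothing is listening on 8441), and a raw container listing. A sketch of the journal part, assuming local systemd rather than sudo over SSH:

package main

import (
	"fmt"
	"os/exec"
)

// gatherJournal pulls the last n lines from a systemd unit's journal, as
// the "Gathering logs for kubelet/containerd" steps above do with n=400.
func gatherJournal(unit string, n int) (string, error) {
	out, err := exec.Command("journalctl", "-u", unit,
		"-n", fmt.Sprint(n)).CombinedOutput()
	return string(out), err
}

func main() {
	for _, u := range []string{"kubelet", "containerd"} {
		logs, err := gatherJournal(u, 400)
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", u, err)
			continue
		}
		fmt.Printf("=== %s journal: %d bytes ===\n", u, len(logs))
	}
}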
	I1212 19:56:37.126278   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:37.136103   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:37.136162   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:37.160403   54219 cri.go:89] found id: ""
	I1212 19:56:37.160416   54219 logs.go:282] 0 containers: []
	W1212 19:56:37.160422   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:37.160428   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:37.160483   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:37.184487   54219 cri.go:89] found id: ""
	I1212 19:56:37.184500   54219 logs.go:282] 0 containers: []
	W1212 19:56:37.184507   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:37.184512   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:37.184582   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:37.226352   54219 cri.go:89] found id: ""
	I1212 19:56:37.226366   54219 logs.go:282] 0 containers: []
	W1212 19:56:37.226373   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:37.226378   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:37.226435   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:37.258223   54219 cri.go:89] found id: ""
	I1212 19:56:37.258267   54219 logs.go:282] 0 containers: []
	W1212 19:56:37.258274   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:37.258280   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:37.258349   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:37.285540   54219 cri.go:89] found id: ""
	I1212 19:56:37.285554   54219 logs.go:282] 0 containers: []
	W1212 19:56:37.285561   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:37.285566   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:37.285622   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:37.309113   54219 cri.go:89] found id: ""
	I1212 19:56:37.309126   54219 logs.go:282] 0 containers: []
	W1212 19:56:37.309132   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:37.309147   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:37.309226   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:37.332041   54219 cri.go:89] found id: ""
	I1212 19:56:37.332054   54219 logs.go:282] 0 containers: []
	W1212 19:56:37.332061   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:37.332069   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:37.332079   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:37.387421   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:37.387440   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:37.397657   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:37.397672   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:37.461255   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:37.453122   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:37.453687   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:37.455442   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:37.455987   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:37.457488   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:56:37.453122   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:37.453687   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:37.455442   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:37.455987   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:37.457488   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:56:37.461265   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:37.461275   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:37.523429   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:37.523446   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:40.054218   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:40.066551   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:40.066620   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:40.099245   54219 cri.go:89] found id: ""
	I1212 19:56:40.099260   54219 logs.go:282] 0 containers: []
	W1212 19:56:40.099267   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:40.099273   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:40.099336   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:40.127637   54219 cri.go:89] found id: ""
	I1212 19:56:40.127653   54219 logs.go:282] 0 containers: []
	W1212 19:56:40.127660   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:40.127666   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:40.127728   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:40.154877   54219 cri.go:89] found id: ""
	I1212 19:56:40.154892   54219 logs.go:282] 0 containers: []
	W1212 19:56:40.154899   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:40.154904   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:40.154966   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:40.186457   54219 cri.go:89] found id: ""
	I1212 19:56:40.186471   54219 logs.go:282] 0 containers: []
	W1212 19:56:40.186478   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:40.186483   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:40.186540   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:40.223505   54219 cri.go:89] found id: ""
	I1212 19:56:40.223520   54219 logs.go:282] 0 containers: []
	W1212 19:56:40.223527   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:40.223532   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:40.223589   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:40.264967   54219 cri.go:89] found id: ""
	I1212 19:56:40.264981   54219 logs.go:282] 0 containers: []
	W1212 19:56:40.264987   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:40.264992   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:40.265064   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:40.288851   54219 cri.go:89] found id: ""
	I1212 19:56:40.288865   54219 logs.go:282] 0 containers: []
	W1212 19:56:40.288871   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:40.288879   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:40.288889   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:40.345104   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:40.345122   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:40.355393   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:40.355408   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:40.421074   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:40.412933   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:40.413606   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:40.415194   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:40.415715   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:40.417273   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:56:40.412933   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:40.413606   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:40.415194   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:40.415715   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:40.417273   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:56:40.421086   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:40.421100   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:40.484292   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:40.484310   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:43.012558   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:43.022764   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:43.022820   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:43.046602   54219 cri.go:89] found id: ""
	I1212 19:56:43.046617   54219 logs.go:282] 0 containers: []
	W1212 19:56:43.046623   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:43.046628   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:43.046688   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:43.070683   54219 cri.go:89] found id: ""
	I1212 19:56:43.070697   54219 logs.go:282] 0 containers: []
	W1212 19:56:43.070703   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:43.070715   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:43.070769   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:43.094890   54219 cri.go:89] found id: ""
	I1212 19:56:43.094904   54219 logs.go:282] 0 containers: []
	W1212 19:56:43.094911   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:43.094915   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:43.094971   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:43.123965   54219 cri.go:89] found id: ""
	I1212 19:56:43.123978   54219 logs.go:282] 0 containers: []
	W1212 19:56:43.123984   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:43.123989   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:43.124043   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:43.149003   54219 cri.go:89] found id: ""
	I1212 19:56:43.149017   54219 logs.go:282] 0 containers: []
	W1212 19:56:43.149024   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:43.149028   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:43.149084   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:43.177565   54219 cri.go:89] found id: ""
	I1212 19:56:43.177578   54219 logs.go:282] 0 containers: []
	W1212 19:56:43.177584   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:43.177589   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:43.177654   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:43.203765   54219 cri.go:89] found id: ""
	I1212 19:56:43.203779   54219 logs.go:282] 0 containers: []
	W1212 19:56:43.203785   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:43.203793   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:43.203803   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:43.267789   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:43.267807   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:43.278476   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:43.278493   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:43.342414   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:43.333163   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:43.333997   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:43.335535   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:43.336094   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:43.337887   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:56:43.333163   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:43.333997   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:43.335535   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:43.336094   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:43.337887   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:56:43.342426   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:43.342436   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:43.406378   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:43.406398   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:45.939180   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:45.950923   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:45.950984   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:45.980081   54219 cri.go:89] found id: ""
	I1212 19:56:45.980095   54219 logs.go:282] 0 containers: []
	W1212 19:56:45.980102   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:45.980106   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:45.980162   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:46.008401   54219 cri.go:89] found id: ""
	I1212 19:56:46.008417   54219 logs.go:282] 0 containers: []
	W1212 19:56:46.008425   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:46.008431   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:46.008500   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:46.037350   54219 cri.go:89] found id: ""
	I1212 19:56:46.037364   54219 logs.go:282] 0 containers: []
	W1212 19:56:46.037382   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:46.037388   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:46.037447   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:46.062477   54219 cri.go:89] found id: ""
	I1212 19:56:46.062491   54219 logs.go:282] 0 containers: []
	W1212 19:56:46.062498   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:46.062503   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:46.062562   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:46.088314   54219 cri.go:89] found id: ""
	I1212 19:56:46.088328   54219 logs.go:282] 0 containers: []
	W1212 19:56:46.088335   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:46.088340   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:46.088397   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:46.118483   54219 cri.go:89] found id: ""
	I1212 19:56:46.118496   54219 logs.go:282] 0 containers: []
	W1212 19:56:46.118503   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:46.118513   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:46.118574   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:46.142723   54219 cri.go:89] found id: ""
	I1212 19:56:46.142737   54219 logs.go:282] 0 containers: []
	W1212 19:56:46.142744   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:46.142752   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:46.142773   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:46.213691   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:46.204216   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:46.204961   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:46.206958   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:46.207684   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:46.209470   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:56:46.204216   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:46.204961   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:46.206958   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:46.207684   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:46.209470   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:56:46.213700   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:46.213710   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:46.286149   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:46.286168   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:46.313728   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:46.313743   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:46.372694   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:46.372711   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:48.883344   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:48.893476   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:48.893532   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:48.917365   54219 cri.go:89] found id: ""
	I1212 19:56:48.917379   54219 logs.go:282] 0 containers: []
	W1212 19:56:48.917386   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:48.917391   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:48.917446   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:48.941342   54219 cri.go:89] found id: ""
	I1212 19:56:48.941356   54219 logs.go:282] 0 containers: []
	W1212 19:56:48.941363   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:48.941367   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:48.941428   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:48.966988   54219 cri.go:89] found id: ""
	I1212 19:56:48.967001   54219 logs.go:282] 0 containers: []
	W1212 19:56:48.967008   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:48.967013   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:48.967070   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:48.990387   54219 cri.go:89] found id: ""
	I1212 19:56:48.990400   54219 logs.go:282] 0 containers: []
	W1212 19:56:48.990407   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:48.990412   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:48.990474   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:49.016237   54219 cri.go:89] found id: ""
	I1212 19:56:49.016251   54219 logs.go:282] 0 containers: []
	W1212 19:56:49.016257   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:49.016263   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:49.016334   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:49.040263   54219 cri.go:89] found id: ""
	I1212 19:56:49.040276   54219 logs.go:282] 0 containers: []
	W1212 19:56:49.040283   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:49.040289   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:49.040346   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:49.064604   54219 cri.go:89] found id: ""
	I1212 19:56:49.064618   54219 logs.go:282] 0 containers: []
	W1212 19:56:49.064625   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:49.064633   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:49.064643   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:49.122132   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:49.122150   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:49.132901   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:49.132916   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:49.203010   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:49.192320   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:49.192966   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:49.194927   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:49.195674   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:49.197449   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:56:49.192320   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:49.192966   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:49.194927   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:49.195674   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:49.197449   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:56:49.203028   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:49.203038   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:49.277223   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:49.277242   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:51.807432   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:51.817646   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:51.817706   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:51.843424   54219 cri.go:89] found id: ""
	I1212 19:56:51.843438   54219 logs.go:282] 0 containers: []
	W1212 19:56:51.843444   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:51.843449   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:51.843510   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:51.868210   54219 cri.go:89] found id: ""
	I1212 19:56:51.868223   54219 logs.go:282] 0 containers: []
	W1212 19:56:51.868230   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:51.868235   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:51.868290   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:51.892493   54219 cri.go:89] found id: ""
	I1212 19:56:51.892506   54219 logs.go:282] 0 containers: []
	W1212 19:56:51.892513   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:51.892518   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:51.892577   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:51.917111   54219 cri.go:89] found id: ""
	I1212 19:56:51.917124   54219 logs.go:282] 0 containers: []
	W1212 19:56:51.917143   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:51.917148   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:51.917203   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:51.945367   54219 cri.go:89] found id: ""
	I1212 19:56:51.945381   54219 logs.go:282] 0 containers: []
	W1212 19:56:51.945387   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:51.945392   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:51.945449   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:51.970026   54219 cri.go:89] found id: ""
	I1212 19:56:51.970040   54219 logs.go:282] 0 containers: []
	W1212 19:56:51.970047   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:51.970053   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:51.970108   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:51.994534   54219 cri.go:89] found id: ""
	I1212 19:56:51.994547   54219 logs.go:282] 0 containers: []
	W1212 19:56:51.994553   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:51.994563   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:51.994573   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:52.028818   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:52.028848   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:52.090429   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:52.090450   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:52.101879   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:52.101895   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:52.171776   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:52.163507   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:52.164111   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:52.165920   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:52.166464   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:52.168011   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:56:52.171787   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:52.171800   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:54.740626   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:54.750925   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:54.750995   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:54.780366   54219 cri.go:89] found id: ""
	I1212 19:56:54.780379   54219 logs.go:282] 0 containers: []
	W1212 19:56:54.780386   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:54.780391   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:54.780449   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:54.804094   54219 cri.go:89] found id: ""
	I1212 19:56:54.804107   54219 logs.go:282] 0 containers: []
	W1212 19:56:54.804113   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:54.804118   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:54.804173   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:54.828262   54219 cri.go:89] found id: ""
	I1212 19:56:54.828276   54219 logs.go:282] 0 containers: []
	W1212 19:56:54.828283   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:54.828288   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:54.828346   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:54.851328   54219 cri.go:89] found id: ""
	I1212 19:56:54.851340   54219 logs.go:282] 0 containers: []
	W1212 19:56:54.851347   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:54.851352   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:54.851406   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:54.874948   54219 cri.go:89] found id: ""
	I1212 19:56:54.874971   54219 logs.go:282] 0 containers: []
	W1212 19:56:54.874978   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:54.874983   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:54.875049   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:54.899059   54219 cri.go:89] found id: ""
	I1212 19:56:54.899072   54219 logs.go:282] 0 containers: []
	W1212 19:56:54.899079   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:54.899085   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:54.899139   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:54.922912   54219 cri.go:89] found id: ""
	I1212 19:56:54.922944   54219 logs.go:282] 0 containers: []
	W1212 19:56:54.922952   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:54.922959   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:54.922969   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:54.982944   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:54.982963   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:54.993620   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:54.993643   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:55.063883   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:55.055908   11437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:55.056618   11437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:55.058221   11437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:55.058538   11437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:55.060159   11437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:56:55.063895   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:55.063905   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:55.126641   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:55.126661   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:57.654341   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:57.664332   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:57.664398   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:57.690295   54219 cri.go:89] found id: ""
	I1212 19:56:57.690312   54219 logs.go:282] 0 containers: []
	W1212 19:56:57.690319   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:57.690324   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:57.690378   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:57.715389   54219 cri.go:89] found id: ""
	I1212 19:56:57.715403   54219 logs.go:282] 0 containers: []
	W1212 19:56:57.715409   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:57.715414   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:57.715485   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:57.741214   54219 cri.go:89] found id: ""
	I1212 19:56:57.741228   54219 logs.go:282] 0 containers: []
	W1212 19:56:57.741234   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:57.741239   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:57.741302   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:57.766791   54219 cri.go:89] found id: ""
	I1212 19:56:57.766804   54219 logs.go:282] 0 containers: []
	W1212 19:56:57.766811   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:57.766817   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:57.766876   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:57.791413   54219 cri.go:89] found id: ""
	I1212 19:56:57.791427   54219 logs.go:282] 0 containers: []
	W1212 19:56:57.791434   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:57.791439   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:57.791494   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:57.815197   54219 cri.go:89] found id: ""
	I1212 19:56:57.815211   54219 logs.go:282] 0 containers: []
	W1212 19:56:57.815218   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:57.815223   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:57.815291   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:57.839238   54219 cri.go:89] found id: ""
	I1212 19:56:57.839251   54219 logs.go:282] 0 containers: []
	W1212 19:56:57.839258   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:57.839265   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:57.839275   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:57.895387   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:57.895408   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:57.906723   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:57.906738   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:57.970462   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:57.962358   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:57.962925   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:57.964418   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:57.964860   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:57.966350   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:56:57.970473   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:57.970483   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:58.035426   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:58.035459   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:00.567794   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:00.577750   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:00.577811   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:00.601472   54219 cri.go:89] found id: ""
	I1212 19:57:00.601485   54219 logs.go:282] 0 containers: []
	W1212 19:57:00.601492   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:00.601497   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:00.601552   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:00.624990   54219 cri.go:89] found id: ""
	I1212 19:57:00.625003   54219 logs.go:282] 0 containers: []
	W1212 19:57:00.625009   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:00.625014   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:00.625069   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:00.652831   54219 cri.go:89] found id: ""
	I1212 19:57:00.652845   54219 logs.go:282] 0 containers: []
	W1212 19:57:00.652852   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:00.652857   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:00.652913   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:00.676463   54219 cri.go:89] found id: ""
	I1212 19:57:00.676477   54219 logs.go:282] 0 containers: []
	W1212 19:57:00.676484   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:00.676489   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:00.676544   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:00.700820   54219 cri.go:89] found id: ""
	I1212 19:57:00.700833   54219 logs.go:282] 0 containers: []
	W1212 19:57:00.700840   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:00.700845   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:00.700904   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:00.728048   54219 cri.go:89] found id: ""
	I1212 19:57:00.728061   54219 logs.go:282] 0 containers: []
	W1212 19:57:00.728068   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:00.728073   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:00.728129   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:00.754114   54219 cri.go:89] found id: ""
	I1212 19:57:00.754127   54219 logs.go:282] 0 containers: []
	W1212 19:57:00.754134   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:00.754142   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:00.754152   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:00.783733   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:00.783749   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:00.842004   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:00.842021   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:00.852440   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:00.852455   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:00.914781   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:00.906826   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:00.907342   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:00.908876   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:00.909350   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:00.910854   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:57:00.914792   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:00.914802   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:03.477311   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:03.488847   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:03.488902   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:03.517173   54219 cri.go:89] found id: ""
	I1212 19:57:03.517186   54219 logs.go:282] 0 containers: []
	W1212 19:57:03.517194   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:03.517198   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:03.517266   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:03.545723   54219 cri.go:89] found id: ""
	I1212 19:57:03.545737   54219 logs.go:282] 0 containers: []
	W1212 19:57:03.545750   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:03.545755   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:03.545812   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:03.572600   54219 cri.go:89] found id: ""
	I1212 19:57:03.572614   54219 logs.go:282] 0 containers: []
	W1212 19:57:03.572622   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:03.572626   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:03.572688   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:03.597001   54219 cri.go:89] found id: ""
	I1212 19:57:03.597015   54219 logs.go:282] 0 containers: []
	W1212 19:57:03.597026   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:03.597031   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:03.597088   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:03.625021   54219 cri.go:89] found id: ""
	I1212 19:57:03.625034   54219 logs.go:282] 0 containers: []
	W1212 19:57:03.625041   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:03.625046   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:03.625104   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:03.653842   54219 cri.go:89] found id: ""
	I1212 19:57:03.653856   54219 logs.go:282] 0 containers: []
	W1212 19:57:03.653864   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:03.653869   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:03.653926   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:03.677783   54219 cri.go:89] found id: ""
	I1212 19:57:03.677797   54219 logs.go:282] 0 containers: []
	W1212 19:57:03.677804   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:03.677812   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:03.677822   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:03.736594   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:03.736617   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:03.747247   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:03.747264   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:03.809956   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:03.801703   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:03.802457   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:03.804050   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:03.804612   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:03.806253   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:57:03.809965   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:03.809987   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:03.871011   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:03.871029   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:06.399328   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:06.409365   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:06.409423   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:06.433061   54219 cri.go:89] found id: ""
	I1212 19:57:06.433075   54219 logs.go:282] 0 containers: []
	W1212 19:57:06.433082   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:06.433094   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:06.433154   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:06.481872   54219 cri.go:89] found id: ""
	I1212 19:57:06.481886   54219 logs.go:282] 0 containers: []
	W1212 19:57:06.481893   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:06.481898   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:06.481954   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:06.510179   54219 cri.go:89] found id: ""
	I1212 19:57:06.510192   54219 logs.go:282] 0 containers: []
	W1212 19:57:06.510200   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:06.510204   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:06.510264   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:06.543022   54219 cri.go:89] found id: ""
	I1212 19:57:06.543036   54219 logs.go:282] 0 containers: []
	W1212 19:57:06.543043   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:06.543048   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:06.543104   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:06.570071   54219 cri.go:89] found id: ""
	I1212 19:57:06.570091   54219 logs.go:282] 0 containers: []
	W1212 19:57:06.570100   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:06.570105   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:06.570170   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:06.599741   54219 cri.go:89] found id: ""
	I1212 19:57:06.599754   54219 logs.go:282] 0 containers: []
	W1212 19:57:06.599761   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:06.599779   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:06.599858   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:06.624514   54219 cri.go:89] found id: ""
	I1212 19:57:06.624528   54219 logs.go:282] 0 containers: []
	W1212 19:57:06.624534   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:06.624542   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:06.624553   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:06.635592   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:06.635610   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:06.702713   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:06.694419   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:06.694856   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:06.696741   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:06.697131   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:06.698788   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:57:06.702724   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:06.702734   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:06.765240   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:06.765258   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:06.793023   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:06.793039   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:09.351721   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:09.361738   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:09.361798   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:09.386854   54219 cri.go:89] found id: ""
	I1212 19:57:09.386867   54219 logs.go:282] 0 containers: []
	W1212 19:57:09.386875   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:09.386880   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:09.386944   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:09.412114   54219 cri.go:89] found id: ""
	I1212 19:57:09.412127   54219 logs.go:282] 0 containers: []
	W1212 19:57:09.412134   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:09.412139   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:09.412197   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:09.449831   54219 cri.go:89] found id: ""
	I1212 19:57:09.449844   54219 logs.go:282] 0 containers: []
	W1212 19:57:09.449854   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:09.449859   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:09.449913   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:09.478096   54219 cri.go:89] found id: ""
	I1212 19:57:09.478109   54219 logs.go:282] 0 containers: []
	W1212 19:57:09.478127   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:09.478133   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:09.478205   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:09.509051   54219 cri.go:89] found id: ""
	I1212 19:57:09.509064   54219 logs.go:282] 0 containers: []
	W1212 19:57:09.509072   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:09.509077   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:09.509140   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:09.533239   54219 cri.go:89] found id: ""
	I1212 19:57:09.533253   54219 logs.go:282] 0 containers: []
	W1212 19:57:09.533259   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:09.533265   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:09.533320   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:09.559093   54219 cri.go:89] found id: ""
	I1212 19:57:09.559108   54219 logs.go:282] 0 containers: []
	W1212 19:57:09.559114   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:09.559122   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:09.559144   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:09.569994   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:09.570010   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:09.632936   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:09.623962   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:09.624715   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:09.626476   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:09.627047   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:09.628827   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:57:09.632947   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:09.632957   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:09.694797   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:09.694815   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:09.723095   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:09.723124   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:12.279206   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:12.289157   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:12.289218   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:12.314051   54219 cri.go:89] found id: ""
	I1212 19:57:12.314065   54219 logs.go:282] 0 containers: []
	W1212 19:57:12.314071   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:12.314077   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:12.314146   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:12.338981   54219 cri.go:89] found id: ""
	I1212 19:57:12.338995   54219 logs.go:282] 0 containers: []
	W1212 19:57:12.339002   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:12.339007   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:12.339064   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:12.364272   54219 cri.go:89] found id: ""
	I1212 19:57:12.364285   54219 logs.go:282] 0 containers: []
	W1212 19:57:12.364294   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:12.364299   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:12.364356   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:12.388633   54219 cri.go:89] found id: ""
	I1212 19:57:12.388647   54219 logs.go:282] 0 containers: []
	W1212 19:57:12.388654   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:12.388659   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:12.388717   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:12.412315   54219 cri.go:89] found id: ""
	I1212 19:57:12.412330   54219 logs.go:282] 0 containers: []
	W1212 19:57:12.412337   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:12.412342   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:12.412399   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:12.435919   54219 cri.go:89] found id: ""
	I1212 19:57:12.435932   54219 logs.go:282] 0 containers: []
	W1212 19:57:12.435938   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:12.435944   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:12.436010   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:12.464586   54219 cri.go:89] found id: ""
	I1212 19:57:12.464600   54219 logs.go:282] 0 containers: []
	W1212 19:57:12.464607   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:12.464615   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:12.464625   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:12.531126   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:12.531144   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:12.541720   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:12.541737   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:12.607440   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:12.598720   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:12.599599   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:12.601461   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:12.602112   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:12.603704   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:57:12.607450   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:12.607460   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:12.669638   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:12.669657   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:15.197082   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:15.207136   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:15.207197   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:15.232075   54219 cri.go:89] found id: ""
	I1212 19:57:15.232089   54219 logs.go:282] 0 containers: []
	W1212 19:57:15.232095   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:15.232101   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:15.232159   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:15.256640   54219 cri.go:89] found id: ""
	I1212 19:57:15.256654   54219 logs.go:282] 0 containers: []
	W1212 19:57:15.256661   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:15.256668   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:15.256725   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:15.281708   54219 cri.go:89] found id: ""
	I1212 19:57:15.281722   54219 logs.go:282] 0 containers: []
	W1212 19:57:15.281729   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:15.281751   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:15.281811   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:15.306602   54219 cri.go:89] found id: ""
	I1212 19:57:15.306615   54219 logs.go:282] 0 containers: []
	W1212 19:57:15.306622   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:15.306627   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:15.306683   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:15.330704   54219 cri.go:89] found id: ""
	I1212 19:57:15.330718   54219 logs.go:282] 0 containers: []
	W1212 19:57:15.330724   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:15.330730   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:15.330788   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:15.356237   54219 cri.go:89] found id: ""
	I1212 19:57:15.356251   54219 logs.go:282] 0 containers: []
	W1212 19:57:15.356258   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:15.356263   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:15.356322   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:15.384137   54219 cri.go:89] found id: ""
	I1212 19:57:15.384149   54219 logs.go:282] 0 containers: []
	W1212 19:57:15.384155   54219 logs.go:284] No container was found matching "kindnet"
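Each diagnostic pass walks a fixed list of control-plane component names and asks crictl for matching container IDs; `found id: ""` with `0 containers: []` for every name, kindnet included, shows that no control-plane container was ever created, not merely that one crashed. A sketch of that scan, assuming crictl is on PATH (modeled on the cri.go lines above, not the actual minikube source):

    // criscan.go: per-component container scan, illustrative only.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            // Same flags as the log: all states, IDs only, name filter.
            out, _ := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
        }
    }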
	I1212 19:57:15.384163   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:15.384174   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:15.394815   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:15.394831   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:15.464384   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:15.455162   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.455895   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.457601   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.458207   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.459801   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:15.455162   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.455895   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.457601   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.458207   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.459801   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:15.464402   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:15.464413   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:15.531093   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:15.531112   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:15.558272   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:15.558287   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
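The timestamps show the whole sequence repeating on a roughly three-second cadence (19:57:12, 19:57:15, 19:57:18, ...): each iteration starts with a pgrep probe for an apiserver process and re-gathers the same diagnostics while it waits. A sketch of a wait loop with that shape (the real minikube wait logic differs; the five-minute deadline here is an arbitrary assumption):

    // waitapiserver.go: poll-until-healthy loop mirroring the cadence
    // in the log. Illustrative sketch, not minikube's implementation.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(5 * time.Minute)
        for time.Now().Before(deadline) {
            // Same probe as the log: -x exact match, -n newest,
            // -f match against the full command line.
            err := exec.Command("sudo", "pgrep", "-xnf",
                "kube-apiserver.*minikube.*").Run()
            if err == nil {
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(3 * time.Second)
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }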
	I1212 19:57:18.114881   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:18.124888   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:18.124947   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:18.153733   54219 cri.go:89] found id: ""
	I1212 19:57:18.153747   54219 logs.go:282] 0 containers: []
	W1212 19:57:18.153753   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:18.153758   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:18.153819   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:18.179987   54219 cri.go:89] found id: ""
	I1212 19:57:18.180001   54219 logs.go:282] 0 containers: []
	W1212 19:57:18.180007   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:18.180012   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:18.180069   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:18.208210   54219 cri.go:89] found id: ""
	I1212 19:57:18.208223   54219 logs.go:282] 0 containers: []
	W1212 19:57:18.208230   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:18.208235   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:18.208290   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:18.240237   54219 cri.go:89] found id: ""
	I1212 19:57:18.240252   54219 logs.go:282] 0 containers: []
	W1212 19:57:18.240258   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:18.240263   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:18.240321   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:18.263335   54219 cri.go:89] found id: ""
	I1212 19:57:18.263349   54219 logs.go:282] 0 containers: []
	W1212 19:57:18.263356   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:18.263361   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:18.263416   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:18.286920   54219 cri.go:89] found id: ""
	I1212 19:57:18.286933   54219 logs.go:282] 0 containers: []
	W1212 19:57:18.286940   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:18.286945   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:18.286999   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:18.311040   54219 cri.go:89] found id: ""
	I1212 19:57:18.311053   54219 logs.go:282] 0 containers: []
	W1212 19:57:18.311060   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:18.311068   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:18.311077   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:18.366520   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:18.366538   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:18.376885   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:18.376903   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:18.439989   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:18.432083   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.432645   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.434309   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.434875   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.436419   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:18.432083   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.432645   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.434309   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.434875   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.436419   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:18.440010   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:18.440020   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:18.511364   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:18.511384   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:21.043380   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:21.053290   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:21.053345   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:21.077334   54219 cri.go:89] found id: ""
	I1212 19:57:21.077348   54219 logs.go:282] 0 containers: []
	W1212 19:57:21.077355   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:21.077360   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:21.077424   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:21.102108   54219 cri.go:89] found id: ""
	I1212 19:57:21.102122   54219 logs.go:282] 0 containers: []
	W1212 19:57:21.102129   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:21.102141   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:21.102198   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:21.125941   54219 cri.go:89] found id: ""
	I1212 19:57:21.125955   54219 logs.go:282] 0 containers: []
	W1212 19:57:21.125962   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:21.125967   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:21.126022   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:21.150198   54219 cri.go:89] found id: ""
	I1212 19:57:21.150211   54219 logs.go:282] 0 containers: []
	W1212 19:57:21.150218   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:21.150229   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:21.150284   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:21.177722   54219 cri.go:89] found id: ""
	I1212 19:57:21.177736   54219 logs.go:282] 0 containers: []
	W1212 19:57:21.177743   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:21.177748   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:21.177806   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:21.205490   54219 cri.go:89] found id: ""
	I1212 19:57:21.205504   54219 logs.go:282] 0 containers: []
	W1212 19:57:21.205511   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:21.205516   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:21.205574   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:21.230104   54219 cri.go:89] found id: ""
	I1212 19:57:21.230118   54219 logs.go:282] 0 containers: []
	W1212 19:57:21.230125   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:21.230132   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:21.230148   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:21.286638   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:21.286655   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:21.297043   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:21.297058   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:21.358837   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:21.350431   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:21.351064   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:21.352763   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:21.353316   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:21.354959   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:21.350431   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:21.351064   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:21.352763   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:21.353316   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:21.354959   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:21.358847   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:21.358858   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:21.425656   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:21.425676   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:23.965162   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:23.974936   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:23.975001   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:23.998921   54219 cri.go:89] found id: ""
	I1212 19:57:23.998935   54219 logs.go:282] 0 containers: []
	W1212 19:57:23.998942   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:23.998947   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:23.999007   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:24.028254   54219 cri.go:89] found id: ""
	I1212 19:57:24.028283   54219 logs.go:282] 0 containers: []
	W1212 19:57:24.028291   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:24.028296   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:24.028365   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:24.053461   54219 cri.go:89] found id: ""
	I1212 19:57:24.053475   54219 logs.go:282] 0 containers: []
	W1212 19:57:24.053482   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:24.053487   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:24.053546   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:24.082160   54219 cri.go:89] found id: ""
	I1212 19:57:24.082175   54219 logs.go:282] 0 containers: []
	W1212 19:57:24.082182   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:24.082187   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:24.082247   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:24.111368   54219 cri.go:89] found id: ""
	I1212 19:57:24.111381   54219 logs.go:282] 0 containers: []
	W1212 19:57:24.111388   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:24.111394   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:24.111452   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:24.139886   54219 cri.go:89] found id: ""
	I1212 19:57:24.139900   54219 logs.go:282] 0 containers: []
	W1212 19:57:24.139907   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:24.139912   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:24.139966   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:24.165622   54219 cri.go:89] found id: ""
	I1212 19:57:24.165636   54219 logs.go:282] 0 containers: []
	W1212 19:57:24.165644   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:24.165652   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:24.165661   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:24.223024   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:24.223042   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:24.234034   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:24.234049   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:24.300286   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:24.292018   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:24.292708   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:24.294225   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:24.294703   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:24.296238   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:24.292018   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:24.292708   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:24.294225   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:24.294703   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:24.296238   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:24.300298   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:24.300308   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:24.366297   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:24.366324   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:26.892882   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:26.903710   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:26.903767   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:26.928734   54219 cri.go:89] found id: ""
	I1212 19:57:26.928748   54219 logs.go:282] 0 containers: []
	W1212 19:57:26.928754   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:26.928759   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:26.928815   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:26.951741   54219 cri.go:89] found id: ""
	I1212 19:57:26.951754   54219 logs.go:282] 0 containers: []
	W1212 19:57:26.951760   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:26.951765   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:26.951820   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:26.977319   54219 cri.go:89] found id: ""
	I1212 19:57:26.977332   54219 logs.go:282] 0 containers: []
	W1212 19:57:26.977339   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:26.977343   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:26.977396   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:27.005917   54219 cri.go:89] found id: ""
	I1212 19:57:27.005931   54219 logs.go:282] 0 containers: []
	W1212 19:57:27.005937   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:27.005942   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:27.005997   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:27.031546   54219 cri.go:89] found id: ""
	I1212 19:57:27.031561   54219 logs.go:282] 0 containers: []
	W1212 19:57:27.031568   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:27.031573   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:27.031630   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:27.055510   54219 cri.go:89] found id: ""
	I1212 19:57:27.055524   54219 logs.go:282] 0 containers: []
	W1212 19:57:27.055530   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:27.055535   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:27.055593   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:27.083350   54219 cri.go:89] found id: ""
	I1212 19:57:27.083364   54219 logs.go:282] 0 containers: []
	W1212 19:57:27.083370   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:27.083389   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:27.083400   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:27.111521   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:27.111542   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:27.166541   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:27.166558   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:27.177159   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:27.177174   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:27.242522   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:27.234517   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:27.235260   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:27.236963   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:27.237352   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:27.238783   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:27.234517   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:27.235260   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:27.236963   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:27.237352   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:27.238783   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:27.242532   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:27.242542   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:29.804626   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:29.814577   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:29.814643   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:29.840378   54219 cri.go:89] found id: ""
	I1212 19:57:29.840391   54219 logs.go:282] 0 containers: []
	W1212 19:57:29.840398   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:29.840403   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:29.840462   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:29.868144   54219 cri.go:89] found id: ""
	I1212 19:57:29.868157   54219 logs.go:282] 0 containers: []
	W1212 19:57:29.868163   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:29.868168   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:29.868227   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:29.893720   54219 cri.go:89] found id: ""
	I1212 19:57:29.893734   54219 logs.go:282] 0 containers: []
	W1212 19:57:29.893740   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:29.893745   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:29.893812   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:29.922305   54219 cri.go:89] found id: ""
	I1212 19:57:29.922319   54219 logs.go:282] 0 containers: []
	W1212 19:57:29.922326   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:29.922331   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:29.922386   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:29.946347   54219 cri.go:89] found id: ""
	I1212 19:57:29.946366   54219 logs.go:282] 0 containers: []
	W1212 19:57:29.946373   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:29.946378   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:29.946434   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:29.971074   54219 cri.go:89] found id: ""
	I1212 19:57:29.971087   54219 logs.go:282] 0 containers: []
	W1212 19:57:29.971094   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:29.971099   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:29.971158   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:29.994674   54219 cri.go:89] found id: ""
	I1212 19:57:29.994697   54219 logs.go:282] 0 containers: []
	W1212 19:57:29.994704   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:29.994712   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:29.994723   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:30.005086   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:30.005108   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:30.083562   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:30.074527   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:30.075529   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:30.077335   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:30.077677   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:30.079272   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:30.074527   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:30.075529   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:30.077335   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:30.077677   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:30.079272   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:30.083572   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:30.083582   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:30.146070   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:30.146089   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:30.178521   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:30.178538   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:32.735968   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:32.746704   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:32.746766   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:32.773559   54219 cri.go:89] found id: ""
	I1212 19:57:32.773573   54219 logs.go:282] 0 containers: []
	W1212 19:57:32.773579   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:32.773584   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:32.773647   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:32.796720   54219 cri.go:89] found id: ""
	I1212 19:57:32.796733   54219 logs.go:282] 0 containers: []
	W1212 19:57:32.796749   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:32.796755   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:32.796809   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:32.819740   54219 cri.go:89] found id: ""
	I1212 19:57:32.819754   54219 logs.go:282] 0 containers: []
	W1212 19:57:32.819761   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:32.819766   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:32.819824   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:32.845383   54219 cri.go:89] found id: ""
	I1212 19:57:32.845396   54219 logs.go:282] 0 containers: []
	W1212 19:57:32.845404   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:32.845409   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:32.845463   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:32.868404   54219 cri.go:89] found id: ""
	I1212 19:57:32.868417   54219 logs.go:282] 0 containers: []
	W1212 19:57:32.868423   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:32.868428   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:32.868482   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:32.893264   54219 cri.go:89] found id: ""
	I1212 19:57:32.893278   54219 logs.go:282] 0 containers: []
	W1212 19:57:32.893284   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:32.893289   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:32.893342   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:32.918080   54219 cri.go:89] found id: ""
	I1212 19:57:32.918103   54219 logs.go:282] 0 containers: []
	W1212 19:57:32.918111   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:32.918124   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:32.918134   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:32.983660   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:32.976099   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:32.976670   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:32.978233   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:32.978797   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:32.979854   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:32.976099   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:32.976670   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:32.978233   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:32.978797   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:32.979854   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:32.983671   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:32.983682   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:33.050130   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:33.050155   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:33.077660   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:33.077675   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:33.136010   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:33.136028   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:35.647123   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:35.656832   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:35.656887   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:35.680780   54219 cri.go:89] found id: ""
	I1212 19:57:35.680793   54219 logs.go:282] 0 containers: []
	W1212 19:57:35.680800   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:35.680805   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:35.680863   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:35.710149   54219 cri.go:89] found id: ""
	I1212 19:57:35.710163   54219 logs.go:282] 0 containers: []
	W1212 19:57:35.710171   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:35.710175   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:35.710233   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:35.737709   54219 cri.go:89] found id: ""
	I1212 19:57:35.737722   54219 logs.go:282] 0 containers: []
	W1212 19:57:35.737729   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:35.737734   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:35.737788   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:35.763960   54219 cri.go:89] found id: ""
	I1212 19:57:35.763974   54219 logs.go:282] 0 containers: []
	W1212 19:57:35.763986   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:35.763991   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:35.764053   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:35.796697   54219 cri.go:89] found id: ""
	I1212 19:57:35.796710   54219 logs.go:282] 0 containers: []
	W1212 19:57:35.796718   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:35.796722   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:35.796782   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:35.820208   54219 cri.go:89] found id: ""
	I1212 19:57:35.820222   54219 logs.go:282] 0 containers: []
	W1212 19:57:35.820229   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:35.820234   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:35.820289   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:35.845107   54219 cri.go:89] found id: ""
	I1212 19:57:35.845121   54219 logs.go:282] 0 containers: []
	W1212 19:57:35.845128   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:35.845135   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:35.845148   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:35.904798   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:35.904816   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:35.915282   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:35.915297   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:35.980125   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:35.972354   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:35.972745   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:35.974261   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:35.974577   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:35.976219   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:35.972354   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:35.972745   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:35.974261   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:35.974577   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:35.976219   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:35.980135   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:35.980146   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:36.042456   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:36.042476   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:38.571541   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:38.581597   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:38.581658   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:38.604774   54219 cri.go:89] found id: ""
	I1212 19:57:38.604787   54219 logs.go:282] 0 containers: []
	W1212 19:57:38.604794   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:38.604799   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:38.604853   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:38.630065   54219 cri.go:89] found id: ""
	I1212 19:57:38.630079   54219 logs.go:282] 0 containers: []
	W1212 19:57:38.630085   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:38.630090   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:38.630151   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:38.654890   54219 cri.go:89] found id: ""
	I1212 19:57:38.654903   54219 logs.go:282] 0 containers: []
	W1212 19:57:38.654910   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:38.654915   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:38.654970   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:38.682669   54219 cri.go:89] found id: ""
	I1212 19:57:38.682684   54219 logs.go:282] 0 containers: []
	W1212 19:57:38.682691   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:38.682696   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:38.682753   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:38.728209   54219 cri.go:89] found id: ""
	I1212 19:57:38.728227   54219 logs.go:282] 0 containers: []
	W1212 19:57:38.728244   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:38.728249   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:38.728317   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:38.757740   54219 cri.go:89] found id: ""
	I1212 19:57:38.757753   54219 logs.go:282] 0 containers: []
	W1212 19:57:38.757768   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:38.757774   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:38.757829   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:38.785300   54219 cri.go:89] found id: ""
	I1212 19:57:38.785314   54219 logs.go:282] 0 containers: []
	W1212 19:57:38.785321   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:38.785328   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:38.785338   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:38.841797   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:38.841815   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:38.852807   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:38.852823   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:38.918575   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:38.909773   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:38.910996   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:38.911473   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:38.912932   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:38.913369   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (verbatim repeat of the six stderr lines above)
	** /stderr **
	I1212 19:57:38.918585   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:38.918596   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:38.980647   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:38.980666   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:41.508125   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:41.518560   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:41.518620   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:41.543483   54219 cri.go:89] found id: ""
	I1212 19:57:41.543497   54219 logs.go:282] 0 containers: []
	W1212 19:57:41.543504   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:41.543509   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:41.543565   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:41.568460   54219 cri.go:89] found id: ""
	I1212 19:57:41.568474   54219 logs.go:282] 0 containers: []
	W1212 19:57:41.568481   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:41.568485   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:41.568541   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:41.592454   54219 cri.go:89] found id: ""
	I1212 19:57:41.592468   54219 logs.go:282] 0 containers: []
	W1212 19:57:41.592475   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:41.592480   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:41.592537   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:41.616514   54219 cri.go:89] found id: ""
	I1212 19:57:41.616528   54219 logs.go:282] 0 containers: []
	W1212 19:57:41.616535   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:41.616540   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:41.616600   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:41.640661   54219 cri.go:89] found id: ""
	I1212 19:57:41.640675   54219 logs.go:282] 0 containers: []
	W1212 19:57:41.640681   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:41.640686   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:41.640741   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:41.668228   54219 cri.go:89] found id: ""
	I1212 19:57:41.668241   54219 logs.go:282] 0 containers: []
	W1212 19:57:41.668248   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:41.668254   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:41.668315   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:41.694010   54219 cri.go:89] found id: ""
	I1212 19:57:41.694023   54219 logs.go:282] 0 containers: []
	W1212 19:57:41.694030   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:41.694048   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:41.694057   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:41.759133   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:41.759153   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:41.770184   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:41.770200   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:41.834777   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:41.826216   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:41.826727   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:41.828548   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:41.828893   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:41.830348   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (verbatim repeat of the six stderr lines above)
	** /stderr **
	I1212 19:57:41.834788   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:41.834798   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:41.896691   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:41.896709   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
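Each gathering cycle above walks the control-plane components one at a time through crictl and warns when the returned ID list is empty. A hedged sketch of the same enumeration as a single shell loop (the component list and crictl flags are copied from the logged commands; run inside the node):

    # mirror minikube's per-component container check
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="$c")
        [ -z "$ids" ] && echo "no container found matching \"$c\""
    done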
	I1212 19:57:44.424748   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:44.434763   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:44.434819   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:44.458808   54219 cri.go:89] found id: ""
	I1212 19:57:44.458821   54219 logs.go:282] 0 containers: []
	W1212 19:57:44.458833   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:44.458839   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:44.458895   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:44.484932   54219 cri.go:89] found id: ""
	I1212 19:57:44.484945   54219 logs.go:282] 0 containers: []
	W1212 19:57:44.484951   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:44.484956   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:44.485013   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:44.509964   54219 cri.go:89] found id: ""
	I1212 19:57:44.509978   54219 logs.go:282] 0 containers: []
	W1212 19:57:44.509985   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:44.509990   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:44.510047   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:44.538212   54219 cri.go:89] found id: ""
	I1212 19:57:44.538226   54219 logs.go:282] 0 containers: []
	W1212 19:57:44.538233   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:44.538239   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:44.538295   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:44.563029   54219 cri.go:89] found id: ""
	I1212 19:57:44.563043   54219 logs.go:282] 0 containers: []
	W1212 19:57:44.563050   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:44.563058   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:44.563116   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:44.594560   54219 cri.go:89] found id: ""
	I1212 19:57:44.594573   54219 logs.go:282] 0 containers: []
	W1212 19:57:44.594580   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:44.594585   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:44.594648   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:44.618882   54219 cri.go:89] found id: ""
	I1212 19:57:44.618896   54219 logs.go:282] 0 containers: []
	W1212 19:57:44.618903   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:44.618910   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:44.618921   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:44.674635   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:44.674653   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:44.685377   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:44.685392   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:44.767577   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:44.758871   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:44.759548   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:44.761205   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:44.761708   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:44.763309   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (verbatim repeat of the six stderr lines above)
	** /stderr **
	I1212 19:57:44.767587   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:44.767599   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:44.830883   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:44.830901   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:47.361584   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:47.371608   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:47.371664   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:47.397902   54219 cri.go:89] found id: ""
	I1212 19:57:47.397915   54219 logs.go:282] 0 containers: []
	W1212 19:57:47.397922   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:47.397927   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:47.397983   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:47.421839   54219 cri.go:89] found id: ""
	I1212 19:57:47.421852   54219 logs.go:282] 0 containers: []
	W1212 19:57:47.421859   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:47.421864   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:47.421920   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:47.444814   54219 cri.go:89] found id: ""
	I1212 19:57:47.444829   54219 logs.go:282] 0 containers: []
	W1212 19:57:47.444836   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:47.444841   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:47.444895   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:47.470743   54219 cri.go:89] found id: ""
	I1212 19:57:47.470758   54219 logs.go:282] 0 containers: []
	W1212 19:57:47.470765   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:47.470770   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:47.470829   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:47.494189   54219 cri.go:89] found id: ""
	I1212 19:57:47.494202   54219 logs.go:282] 0 containers: []
	W1212 19:57:47.494209   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:47.494214   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:47.494271   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:47.522490   54219 cri.go:89] found id: ""
	I1212 19:57:47.522504   54219 logs.go:282] 0 containers: []
	W1212 19:57:47.522510   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:47.522515   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:47.522573   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:47.546914   54219 cri.go:89] found id: ""
	I1212 19:57:47.546929   54219 logs.go:282] 0 containers: []
	W1212 19:57:47.546938   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:47.546948   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:47.546960   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:47.602569   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:47.602586   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:47.613063   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:47.613077   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:47.675404   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:47.667395   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:47.668258   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:47.669918   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:47.670233   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:47.671713   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (verbatim repeat of the six stderr lines above)
	** /stderr **
	I1212 19:57:47.675413   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:47.675424   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:47.744526   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:47.744545   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:50.275957   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:50.285985   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:50.286042   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:50.310834   54219 cri.go:89] found id: ""
	I1212 19:57:50.310848   54219 logs.go:282] 0 containers: []
	W1212 19:57:50.310855   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:50.310860   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:50.310915   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:50.335949   54219 cri.go:89] found id: ""
	I1212 19:57:50.335962   54219 logs.go:282] 0 containers: []
	W1212 19:57:50.335969   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:50.335973   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:50.336042   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:50.361218   54219 cri.go:89] found id: ""
	I1212 19:57:50.361233   54219 logs.go:282] 0 containers: []
	W1212 19:57:50.361239   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:50.361244   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:50.361302   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:50.389990   54219 cri.go:89] found id: ""
	I1212 19:57:50.390004   54219 logs.go:282] 0 containers: []
	W1212 19:57:50.390011   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:50.390016   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:50.390070   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:50.414872   54219 cri.go:89] found id: ""
	I1212 19:57:50.414886   54219 logs.go:282] 0 containers: []
	W1212 19:57:50.414893   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:50.414898   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:50.414957   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:50.439081   54219 cri.go:89] found id: ""
	I1212 19:57:50.439094   54219 logs.go:282] 0 containers: []
	W1212 19:57:50.439102   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:50.439106   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:50.439162   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:50.463124   54219 cri.go:89] found id: ""
	I1212 19:57:50.463137   54219 logs.go:282] 0 containers: []
	W1212 19:57:50.463144   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:50.463151   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:50.463160   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:50.519197   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:50.519217   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:50.529678   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:50.529697   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:50.593926   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:50.585789   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:50.586582   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:50.588344   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:50.588667   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:50.589987   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (verbatim repeat of the six stderr lines above)
	** /stderr **
	I1212 19:57:50.593936   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:50.593946   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:50.663627   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:50.663647   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:53.195155   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:53.205007   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:53.205065   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:53.228910   54219 cri.go:89] found id: ""
	I1212 19:57:53.228924   54219 logs.go:282] 0 containers: []
	W1212 19:57:53.228930   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:53.228935   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:53.228992   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:53.256269   54219 cri.go:89] found id: ""
	I1212 19:57:53.256282   54219 logs.go:282] 0 containers: []
	W1212 19:57:53.256289   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:53.256294   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:53.256363   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:53.279490   54219 cri.go:89] found id: ""
	I1212 19:57:53.279505   54219 logs.go:282] 0 containers: []
	W1212 19:57:53.279512   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:53.279517   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:53.279575   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:53.303201   54219 cri.go:89] found id: ""
	I1212 19:57:53.303215   54219 logs.go:282] 0 containers: []
	W1212 19:57:53.303222   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:53.303227   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:53.303285   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:53.331320   54219 cri.go:89] found id: ""
	I1212 19:57:53.331333   54219 logs.go:282] 0 containers: []
	W1212 19:57:53.331349   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:53.331354   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:53.331424   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:53.355603   54219 cri.go:89] found id: ""
	I1212 19:57:53.355617   54219 logs.go:282] 0 containers: []
	W1212 19:57:53.355624   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:53.355629   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:53.355685   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:53.380364   54219 cri.go:89] found id: ""
	I1212 19:57:53.380378   54219 logs.go:282] 0 containers: []
	W1212 19:57:53.380385   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:53.380394   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:53.380405   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:53.448989   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:53.440655   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:53.441253   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:53.442753   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:53.443064   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:53.444518   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (verbatim repeat of the six stderr lines above)
	** /stderr **
	I1212 19:57:53.449000   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:53.449010   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:53.516879   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:53.516908   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:53.550642   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:53.550661   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:53.608676   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:53.608694   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
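The timestamps (19:57:35, :38, :41, ...) show the whole probe-and-gather cycle retrying roughly every three seconds; only the order of the "Gathering logs for ..." steps shifts between cycles, as in the 19:57:53 pass just above. An illustrative wait loop with the same shape (the pgrep expression is copied from the log; this sketch is not minikube's actual retry code):

    # poll until a kube-apiserver process for this profile appears
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sleep 3
    done
    echo "kube-apiserver process is up"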
	I1212 19:57:56.120012   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:56.129790   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:56.129852   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:56.154949   54219 cri.go:89] found id: ""
	I1212 19:57:56.154963   54219 logs.go:282] 0 containers: []
	W1212 19:57:56.154969   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:56.154974   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:56.155029   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:56.178218   54219 cri.go:89] found id: ""
	I1212 19:57:56.178232   54219 logs.go:282] 0 containers: []
	W1212 19:57:56.178240   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:56.178254   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:56.178311   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:56.202037   54219 cri.go:89] found id: ""
	I1212 19:57:56.202053   54219 logs.go:282] 0 containers: []
	W1212 19:57:56.202060   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:56.202065   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:56.202127   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:56.226077   54219 cri.go:89] found id: ""
	I1212 19:57:56.226106   54219 logs.go:282] 0 containers: []
	W1212 19:57:56.226114   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:56.226120   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:56.226183   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:56.249790   54219 cri.go:89] found id: ""
	I1212 19:57:56.249803   54219 logs.go:282] 0 containers: []
	W1212 19:57:56.249810   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:56.249815   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:56.249868   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:56.273767   54219 cri.go:89] found id: ""
	I1212 19:57:56.273780   54219 logs.go:282] 0 containers: []
	W1212 19:57:56.273787   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:56.273793   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:56.273851   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:56.301574   54219 cri.go:89] found id: ""
	I1212 19:57:56.301587   54219 logs.go:282] 0 containers: []
	W1212 19:57:56.301594   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:56.301602   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:56.301612   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:56.362705   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:56.362723   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:56.373142   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:56.373166   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:56.434197   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:56.426404   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:56.426921   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:56.428541   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:56.429015   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:56.430546   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (verbatim repeat of the six stderr lines above)
	** /stderr **
	I1212 19:57:56.434207   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:56.434217   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:56.497280   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:56.497298   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:59.029935   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:59.040115   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:59.040173   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:59.064443   54219 cri.go:89] found id: ""
	I1212 19:57:59.064458   54219 logs.go:282] 0 containers: []
	W1212 19:57:59.064465   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:59.064470   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:59.064525   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:59.089160   54219 cri.go:89] found id: ""
	I1212 19:57:59.089173   54219 logs.go:282] 0 containers: []
	W1212 19:57:59.089180   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:59.089185   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:59.089250   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:59.113771   54219 cri.go:89] found id: ""
	I1212 19:57:59.113785   54219 logs.go:282] 0 containers: []
	W1212 19:57:59.113792   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:59.113797   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:59.113852   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:59.141148   54219 cri.go:89] found id: ""
	I1212 19:57:59.141162   54219 logs.go:282] 0 containers: []
	W1212 19:57:59.141169   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:59.141174   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:59.141241   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:59.163991   54219 cri.go:89] found id: ""
	I1212 19:57:59.164005   54219 logs.go:282] 0 containers: []
	W1212 19:57:59.164011   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:59.164016   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:59.164076   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:59.189011   54219 cri.go:89] found id: ""
	I1212 19:57:59.189026   54219 logs.go:282] 0 containers: []
	W1212 19:57:59.189033   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:59.189038   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:59.189092   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:59.213106   54219 cri.go:89] found id: ""
	I1212 19:57:59.213119   54219 logs.go:282] 0 containers: []
	W1212 19:57:59.213125   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:59.213133   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:59.213143   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:59.268036   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:59.268054   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:59.278468   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:59.278483   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:59.343881   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:59.335767   13724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:59.336563   13724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:59.338140   13724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:59.338447   13724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:59.339954   13724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (verbatim repeat of the six stderr lines above)
	** /stderr **
	I1212 19:57:59.343891   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:59.343909   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:59.406439   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:59.406457   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
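The unit logs collected in each cycle can also be pulled manually with the same commands the test runs; a sketch, assuming a shell inside the node (flags copied verbatim from the logged commands; --no-pager is an assumption added for interactive use, not present in the originals):

    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u containerd -n 400 --no-pager
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400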
	I1212 19:58:01.935967   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:01.947272   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:01.947331   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:01.980222   54219 cri.go:89] found id: ""
	I1212 19:58:01.980235   54219 logs.go:282] 0 containers: []
	W1212 19:58:01.980251   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:01.980257   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:01.980314   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:02.009777   54219 cri.go:89] found id: ""
	I1212 19:58:02.009794   54219 logs.go:282] 0 containers: []
	W1212 19:58:02.009802   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:02.009808   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:02.009899   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:02.042576   54219 cri.go:89] found id: ""
	I1212 19:58:02.042591   54219 logs.go:282] 0 containers: []
	W1212 19:58:02.042598   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:02.042603   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:02.042680   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:02.067370   54219 cri.go:89] found id: ""
	I1212 19:58:02.067384   54219 logs.go:282] 0 containers: []
	W1212 19:58:02.067392   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:02.067397   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:02.067462   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:02.096410   54219 cri.go:89] found id: ""
	I1212 19:58:02.096423   54219 logs.go:282] 0 containers: []
	W1212 19:58:02.096430   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:02.096436   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:02.096495   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:02.120186   54219 cri.go:89] found id: ""
	I1212 19:58:02.120200   54219 logs.go:282] 0 containers: []
	W1212 19:58:02.120207   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:02.120212   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:02.120272   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:02.146219   54219 cri.go:89] found id: ""
	I1212 19:58:02.146233   54219 logs.go:282] 0 containers: []
	W1212 19:58:02.146240   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:02.146264   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:02.146274   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:02.203137   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:02.203156   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
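These two gathering commands pull the last 400 lines of the kubelet systemd unit and the kernel ring buffer filtered to warning severity and above (dmesg's -H selects human-readable output, -P disables the pager, -L=never disables color). Either can be rerun by hand over SSH, for example (same profile assumption as above):

    $ out/minikube-linux-arm64 -p functional-384006 ssh -- \
        "sudo journalctl -u kubelet -n 400 --no-pager"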
	I1212 19:58:02.214269   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:02.214290   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:02.282468   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:02.273826   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:02.274485   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:02.276251   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:02.276887   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:02.278544   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:58:02.273826   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:02.274485   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:02.276251   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:02.276887   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:02.278544   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
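Note the binary being invoked: minikube runs the kubectl it staged under /var/lib/minikube/binaries/v1.35.0-beta.0/ against the node-local kubeconfig, so the refusal is observed from inside the node rather than through the host's kubectl. The probe can be repeated manually (again a sketch against the assumed profile):

    $ out/minikube-linux-arm64 -p functional-384006 ssh -- \
        "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get nodes --kubeconfig=/var/lib/minikube/kubeconfig"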
	I1212 19:58:02.282477   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:02.282490   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:02.345078   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:02.345096   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
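Each diagnostic pass reduces to the same seven crictl lookups, one per expected control-plane or CNI container. Condensed into a loop, the equivalent node-side check looks like this (a sketch; run inside the node, where crictl is on the PATH):

    $ for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
               kube-controller-manager kindnet; do
        sudo crictl ps -a --quiet --name="$c" | grep -q . \
          && echo "$c: present" || echo "$c: not found"
      done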
	[The same gathering pass then repeats near-verbatim roughly every three seconds, from 19:58:04 through 19:58:22 (seven more passes; kubectl processes 13932, 14034, 14141, 14245, 14352, 14456 and 14560). Every pass finds zero kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager and kindnet containers, and every `describe nodes` attempt fails with the identical connection-refused error on localhost:8441.]
	I1212 19:58:25.355355   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:25.367957   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:25.368041   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:25.394847   54219 cri.go:89] found id: ""
	I1212 19:58:25.394861   54219 logs.go:282] 0 containers: []
	W1212 19:58:25.394868   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:25.394873   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:25.394928   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:25.419394   54219 cri.go:89] found id: ""
	I1212 19:58:25.419408   54219 logs.go:282] 0 containers: []
	W1212 19:58:25.419414   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:25.419419   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:25.419477   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:25.444373   54219 cri.go:89] found id: ""
	I1212 19:58:25.444386   54219 logs.go:282] 0 containers: []
	W1212 19:58:25.444393   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:25.444398   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:25.444455   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:25.467872   54219 cri.go:89] found id: ""
	I1212 19:58:25.467886   54219 logs.go:282] 0 containers: []
	W1212 19:58:25.467892   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:25.467897   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:25.467952   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:25.491493   54219 cri.go:89] found id: ""
	I1212 19:58:25.491507   54219 logs.go:282] 0 containers: []
	W1212 19:58:25.491514   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:25.491519   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:25.491575   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:25.515809   54219 cri.go:89] found id: ""
	I1212 19:58:25.515832   54219 logs.go:282] 0 containers: []
	W1212 19:58:25.515864   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:25.515869   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:25.515939   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:25.540733   54219 cri.go:89] found id: ""
	I1212 19:58:25.540747   54219 logs.go:282] 0 containers: []
	W1212 19:58:25.540754   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:25.540762   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:25.540773   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:25.551372   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:25.551387   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:25.613099   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:25.604731   14664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:25.605382   14664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:25.607062   14664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:25.607684   14664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:25.609352   14664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:25.613109   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:25.613119   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:25.674835   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:25.674854   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:25.702894   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:25.702910   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
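The trace above is one iteration of a poll-and-diagnose cycle: roughly every three seconds minikube looks for a kube-apiserver process with pgrep, lists CRI containers for each control-plane component with crictl, and, finding none, gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. The real implementation lives in minikube's logs.go and cri.go (cited on each line); the following is only an illustrative Go sketch of that cycle, assuming sudo and crictl are available on the node:

// Illustrative sketch only: approximates the poll cycle visible in this
// trace (pgrep for the apiserver, then crictl per component). The real
// logic is in minikube's logs.go/cri.go; this is not that code.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet",
}

// apiserverRunning mirrors: sudo pgrep -xnf kube-apiserver.*minikube.*
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// listContainers mirrors: sudo crictl ps -a --quiet --name=<component>
func listContainers(name string) ([]byte, error) {
	return exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
}

func main() {
	for !apiserverRunning() {
		for _, c := range components {
			if out, err := listContainers(c); err != nil || len(out) == 0 {
				fmt.Printf("no container found matching %q\n", c)
			}
		}
		time.Sleep(3 * time.Second) // the trace shows ~3s between iterations
	}
	fmt.Println("kube-apiserver process found")
}

The loop never exits in this run, which is why the cycle repeats below with only timestamps and helper PIDs changing.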
	I1212 19:58:28.260731   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:28.270423   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:28.270480   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:28.297804   54219 cri.go:89] found id: ""
	I1212 19:58:28.297818   54219 logs.go:282] 0 containers: []
	W1212 19:58:28.297825   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:28.297830   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:28.297887   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:28.322143   54219 cri.go:89] found id: ""
	I1212 19:58:28.322157   54219 logs.go:282] 0 containers: []
	W1212 19:58:28.322164   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:28.322169   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:28.322223   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:28.346215   54219 cri.go:89] found id: ""
	I1212 19:58:28.346229   54219 logs.go:282] 0 containers: []
	W1212 19:58:28.346236   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:28.346241   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:28.346297   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:28.370542   54219 cri.go:89] found id: ""
	I1212 19:58:28.370556   54219 logs.go:282] 0 containers: []
	W1212 19:58:28.370563   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:28.370574   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:28.370634   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:28.397655   54219 cri.go:89] found id: ""
	I1212 19:58:28.397670   54219 logs.go:282] 0 containers: []
	W1212 19:58:28.397677   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:28.397682   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:28.397737   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:28.421548   54219 cri.go:89] found id: ""
	I1212 19:58:28.421561   54219 logs.go:282] 0 containers: []
	W1212 19:58:28.421568   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:28.421573   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:28.421627   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:28.445812   54219 cri.go:89] found id: ""
	I1212 19:58:28.445826   54219 logs.go:282] 0 containers: []
	W1212 19:58:28.445833   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:28.445840   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:28.445850   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:28.501608   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:28.501625   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:28.513441   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:28.513494   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:28.582207   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:28.574455   14770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:28.574891   14770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:28.576467   14770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:28.576813   14770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:28.578288   14770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:28.582217   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:28.582229   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:28.644833   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:28.644850   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
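Note the fallback built into the container-status command: "which crictl || echo crictl" resolves the crictl binary when present (and echoes the bare name otherwise, so the command still fails cleanly), and the trailing "|| sudo docker ps -a" falls through to the docker CLI. A hedged Go equivalent of that try-crictl-then-docker pattern, illustrative only:

// Sketch of the fallback used by the "container status" gathering step:
// prefer crictl, fall back to the docker CLI when crictl is missing or
// errors. Illustrative only, not minikube's implementation.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() ([]byte, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
		return out, nil
	}
	// crictl absent or failed; try docker instead
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("neither crictl nor docker produced a listing:", err)
		return
	}
	fmt.Print(string(out))
}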
	I1212 19:58:31.174256   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:31.184503   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:31.184561   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:31.220119   54219 cri.go:89] found id: ""
	I1212 19:58:31.220139   54219 logs.go:282] 0 containers: []
	W1212 19:58:31.220147   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:31.220158   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:31.220226   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:31.253789   54219 cri.go:89] found id: ""
	I1212 19:58:31.253802   54219 logs.go:282] 0 containers: []
	W1212 19:58:31.253815   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:31.253825   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:31.253884   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:31.279880   54219 cri.go:89] found id: ""
	I1212 19:58:31.279899   54219 logs.go:282] 0 containers: []
	W1212 19:58:31.279906   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:31.279911   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:31.279965   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:31.304491   54219 cri.go:89] found id: ""
	I1212 19:58:31.304504   54219 logs.go:282] 0 containers: []
	W1212 19:58:31.304511   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:31.304515   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:31.304569   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:31.331430   54219 cri.go:89] found id: ""
	I1212 19:58:31.331444   54219 logs.go:282] 0 containers: []
	W1212 19:58:31.331451   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:31.331456   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:31.331510   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:31.357552   54219 cri.go:89] found id: ""
	I1212 19:58:31.357566   54219 logs.go:282] 0 containers: []
	W1212 19:58:31.357572   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:31.357577   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:31.357633   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:31.381902   54219 cri.go:89] found id: ""
	I1212 19:58:31.381916   54219 logs.go:282] 0 containers: []
	W1212 19:58:31.381923   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:31.381930   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:31.381940   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:31.437813   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:31.437831   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:31.448492   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:31.448509   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:31.513035   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:31.504749   14874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:31.505284   14874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:31.506766   14874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:31.507301   14874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:31.509069   14874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:31.513045   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:31.513056   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:31.574565   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:31.574584   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:34.102253   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:34.112554   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:34.112620   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:34.137461   54219 cri.go:89] found id: ""
	I1212 19:58:34.137475   54219 logs.go:282] 0 containers: []
	W1212 19:58:34.137482   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:34.137487   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:34.137541   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:34.166138   54219 cri.go:89] found id: ""
	I1212 19:58:34.166161   54219 logs.go:282] 0 containers: []
	W1212 19:58:34.166169   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:34.166174   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:34.166234   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:34.193829   54219 cri.go:89] found id: ""
	I1212 19:58:34.193842   54219 logs.go:282] 0 containers: []
	W1212 19:58:34.193849   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:34.193854   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:34.193906   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:34.231695   54219 cri.go:89] found id: ""
	I1212 19:58:34.231708   54219 logs.go:282] 0 containers: []
	W1212 19:58:34.231716   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:34.231721   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:34.231777   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:34.264331   54219 cri.go:89] found id: ""
	I1212 19:58:34.264344   54219 logs.go:282] 0 containers: []
	W1212 19:58:34.264351   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:34.264356   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:34.264412   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:34.288829   54219 cri.go:89] found id: ""
	I1212 19:58:34.288842   54219 logs.go:282] 0 containers: []
	W1212 19:58:34.288849   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:34.288854   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:34.288908   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:34.316442   54219 cri.go:89] found id: ""
	I1212 19:58:34.316456   54219 logs.go:282] 0 containers: []
	W1212 19:58:34.316463   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:34.316471   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:34.316481   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:34.376058   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:34.376076   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:34.386998   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:34.387013   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:34.452379   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:34.443685   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:34.444192   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:34.445866   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:34.446403   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:34.448108   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:34.452390   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:34.452401   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:34.514653   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:34.514671   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:37.042798   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:37.053097   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:37.053156   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:37.076591   54219 cri.go:89] found id: ""
	I1212 19:58:37.076604   54219 logs.go:282] 0 containers: []
	W1212 19:58:37.076611   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:37.076616   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:37.076674   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:37.099322   54219 cri.go:89] found id: ""
	I1212 19:58:37.099335   54219 logs.go:282] 0 containers: []
	W1212 19:58:37.099342   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:37.099348   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:37.099402   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:37.123234   54219 cri.go:89] found id: ""
	I1212 19:58:37.123248   54219 logs.go:282] 0 containers: []
	W1212 19:58:37.123255   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:37.123260   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:37.123314   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:37.147746   54219 cri.go:89] found id: ""
	I1212 19:58:37.147760   54219 logs.go:282] 0 containers: []
	W1212 19:58:37.147767   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:37.147772   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:37.147827   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:37.173059   54219 cri.go:89] found id: ""
	I1212 19:58:37.173072   54219 logs.go:282] 0 containers: []
	W1212 19:58:37.173079   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:37.173084   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:37.173141   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:37.208173   54219 cri.go:89] found id: ""
	I1212 19:58:37.208192   54219 logs.go:282] 0 containers: []
	W1212 19:58:37.208199   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:37.208204   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:37.208263   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:37.239049   54219 cri.go:89] found id: ""
	I1212 19:58:37.239063   54219 logs.go:282] 0 containers: []
	W1212 19:58:37.239070   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:37.239078   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:37.239088   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:37.297849   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:37.297866   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:37.309078   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:37.309092   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:37.375029   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:37.367053   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:37.367567   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:37.369321   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:37.369766   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:37.371297   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:37.375038   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:37.375050   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:37.436797   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:37.436815   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:39.970179   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:39.980227   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:39.980293   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:40.004882   54219 cri.go:89] found id: ""
	I1212 19:58:40.004896   54219 logs.go:282] 0 containers: []
	W1212 19:58:40.004903   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:40.004907   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:40.004963   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:40.066617   54219 cri.go:89] found id: ""
	I1212 19:58:40.066632   54219 logs.go:282] 0 containers: []
	W1212 19:58:40.066640   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:40.066645   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:40.066717   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:40.102654   54219 cri.go:89] found id: ""
	I1212 19:58:40.102669   54219 logs.go:282] 0 containers: []
	W1212 19:58:40.102676   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:40.102681   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:40.102745   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:40.133625   54219 cri.go:89] found id: ""
	I1212 19:58:40.133640   54219 logs.go:282] 0 containers: []
	W1212 19:58:40.133648   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:40.133654   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:40.133723   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:40.166821   54219 cri.go:89] found id: ""
	I1212 19:58:40.166845   54219 logs.go:282] 0 containers: []
	W1212 19:58:40.166853   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:40.166858   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:40.166927   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:40.195477   54219 cri.go:89] found id: ""
	I1212 19:58:40.195500   54219 logs.go:282] 0 containers: []
	W1212 19:58:40.195509   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:40.195515   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:40.195580   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:40.225935   54219 cri.go:89] found id: ""
	I1212 19:58:40.225949   54219 logs.go:282] 0 containers: []
	W1212 19:58:40.225967   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:40.225976   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:40.225986   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:40.302829   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:40.294976   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:40.295352   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:40.296835   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:40.297228   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:40.298715   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:40.302839   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:40.302850   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:40.365532   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:40.365552   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:40.400282   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:40.400298   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:40.460370   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:40.460389   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:42.971593   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:42.981866   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:42.981931   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:43.006660   54219 cri.go:89] found id: ""
	I1212 19:58:43.006674   54219 logs.go:282] 0 containers: []
	W1212 19:58:43.006690   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:43.006696   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:43.006753   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:43.033557   54219 cri.go:89] found id: ""
	I1212 19:58:43.033571   54219 logs.go:282] 0 containers: []
	W1212 19:58:43.033578   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:43.033583   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:43.033643   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:43.062054   54219 cri.go:89] found id: ""
	I1212 19:58:43.062067   54219 logs.go:282] 0 containers: []
	W1212 19:58:43.062073   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:43.062078   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:43.062139   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:43.086826   54219 cri.go:89] found id: ""
	I1212 19:58:43.086841   54219 logs.go:282] 0 containers: []
	W1212 19:58:43.086849   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:43.086854   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:43.086920   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:43.112001   54219 cri.go:89] found id: ""
	I1212 19:58:43.112015   54219 logs.go:282] 0 containers: []
	W1212 19:58:43.112022   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:43.112027   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:43.112099   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:43.137727   54219 cri.go:89] found id: ""
	I1212 19:58:43.137741   54219 logs.go:282] 0 containers: []
	W1212 19:58:43.137748   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:43.137753   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:43.137811   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:43.163693   54219 cri.go:89] found id: ""
	I1212 19:58:43.163707   54219 logs.go:282] 0 containers: []
	W1212 19:58:43.163714   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:43.163731   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:43.163742   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:43.174602   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:43.174617   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:43.254196   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:43.243179   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:43.243697   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:43.245358   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:43.245738   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:43.247174   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:43.254213   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:43.254224   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:43.321187   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:43.321206   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:43.353090   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:43.353105   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:45.910450   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:45.920312   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:45.920373   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:45.942607   54219 cri.go:89] found id: ""
	I1212 19:58:45.942620   54219 logs.go:282] 0 containers: []
	W1212 19:58:45.942627   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:45.942632   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:45.942688   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:45.966155   54219 cri.go:89] found id: ""
	I1212 19:58:45.966168   54219 logs.go:282] 0 containers: []
	W1212 19:58:45.966175   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:45.966179   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:45.966235   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:45.989218   54219 cri.go:89] found id: ""
	I1212 19:58:45.989232   54219 logs.go:282] 0 containers: []
	W1212 19:58:45.989239   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:45.989243   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:45.989298   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:46.016207   54219 cri.go:89] found id: ""
	I1212 19:58:46.016222   54219 logs.go:282] 0 containers: []
	W1212 19:58:46.016228   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:46.016234   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:46.016291   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:46.045554   54219 cri.go:89] found id: ""
	I1212 19:58:46.045569   54219 logs.go:282] 0 containers: []
	W1212 19:58:46.045576   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:46.045581   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:46.045635   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:46.069843   54219 cri.go:89] found id: ""
	I1212 19:58:46.069856   54219 logs.go:282] 0 containers: []
	W1212 19:58:46.069865   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:46.069870   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:46.069924   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:46.093840   54219 cri.go:89] found id: ""
	I1212 19:58:46.093854   54219 logs.go:282] 0 containers: []
	W1212 19:58:46.093860   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:46.093869   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:46.093878   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:46.149331   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:46.149349   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:46.159907   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:46.159924   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:46.230481   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:46.222609   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:46.223298   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:46.224445   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:46.225036   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:46.226545   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:46.230490   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:46.230502   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:46.300039   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:46.300060   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:48.829920   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:48.840025   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:48.840080   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:48.869553   54219 cri.go:89] found id: ""
	I1212 19:58:48.869567   54219 logs.go:282] 0 containers: []
	W1212 19:58:48.869574   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:48.869579   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:48.869633   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:48.894185   54219 cri.go:89] found id: ""
	I1212 19:58:48.894199   54219 logs.go:282] 0 containers: []
	W1212 19:58:48.894205   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:48.894220   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:48.894280   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:48.918726   54219 cri.go:89] found id: ""
	I1212 19:58:48.918740   54219 logs.go:282] 0 containers: []
	W1212 19:58:48.918752   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:48.918757   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:48.918814   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:48.943092   54219 cri.go:89] found id: ""
	I1212 19:58:48.943106   54219 logs.go:282] 0 containers: []
	W1212 19:58:48.943113   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:48.943118   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:48.943172   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:48.967616   54219 cri.go:89] found id: ""
	I1212 19:58:48.967630   54219 logs.go:282] 0 containers: []
	W1212 19:58:48.967637   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:48.967642   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:48.967697   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:48.993271   54219 cri.go:89] found id: ""
	I1212 19:58:48.993284   54219 logs.go:282] 0 containers: []
	W1212 19:58:48.993291   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:48.993296   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:48.993355   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:49.018337   54219 cri.go:89] found id: ""
	I1212 19:58:49.018359   54219 logs.go:282] 0 containers: []
	W1212 19:58:49.018376   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:49.018386   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:49.018395   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:49.074620   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:49.074637   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:49.085360   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:49.085378   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:49.147253   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:49.138899   15501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:49.139468   15501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:49.141426   15501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:49.141871   15501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:49.143362   15501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:58:49.138899   15501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:49.139468   15501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:49.141426   15501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:49.141871   15501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:49.143362   15501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:58:49.147263   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:49.147274   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:49.215977   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:49.215996   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
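Each polling cycle issues seven separate crictl calls, one per control-plane component. Since crictl's --name filter accepts a regular expression, the same sweep can be reproduced by hand in a single pass; a sketch, run inside the node and relying on the node's /etc/crictl.yaml for the runtime endpoint:

    # One crictl call covering all the components the log checks individually.
    sudo crictl ps -a \
      --name 'kube-apiserver|etcd|coredns|kube-scheduler|kube-proxy|kube-controller-manager|kindnet'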
	I1212 19:58:51.751688   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:51.761744   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:51.761806   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:51.790227   54219 cri.go:89] found id: ""
	I1212 19:58:51.790241   54219 logs.go:282] 0 containers: []
	W1212 19:58:51.790248   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:51.790253   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:51.790309   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:51.817250   54219 cri.go:89] found id: ""
	I1212 19:58:51.817264   54219 logs.go:282] 0 containers: []
	W1212 19:58:51.817271   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:51.817276   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:51.817346   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:51.842832   54219 cri.go:89] found id: ""
	I1212 19:58:51.842845   54219 logs.go:282] 0 containers: []
	W1212 19:58:51.842851   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:51.842856   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:51.842916   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:51.867234   54219 cri.go:89] found id: ""
	I1212 19:58:51.867249   54219 logs.go:282] 0 containers: []
	W1212 19:58:51.867256   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:51.867261   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:51.867315   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:51.895349   54219 cri.go:89] found id: ""
	I1212 19:58:51.895364   54219 logs.go:282] 0 containers: []
	W1212 19:58:51.895371   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:51.895376   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:51.895432   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:51.920577   54219 cri.go:89] found id: ""
	I1212 19:58:51.920594   54219 logs.go:282] 0 containers: []
	W1212 19:58:51.920603   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:51.920612   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:51.920674   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:51.945231   54219 cri.go:89] found id: ""
	I1212 19:58:51.945244   54219 logs.go:282] 0 containers: []
	W1212 19:58:51.945251   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:51.945258   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:51.945268   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:52.004677   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:52.004694   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:52.018082   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:52.018098   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:52.085848   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:52.076633   15607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:52.077498   15607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:52.079211   15607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:52.079913   15607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:52.081677   15607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:58:52.076633   15607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:52.077498   15607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:52.079211   15607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:52.079913   15607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:52.081677   15607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:58:52.085859   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:52.085869   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:52.155168   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:52.155196   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
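crictl keeps returning an empty container list, which suggests containerd never created the static-pod containers at all (a stopped container would still show up under crictl ps -a). Containerd can be asked directly what survives in its Kubernetes namespace; run inside the node:

    # containerd's own view of the k8s.io namespace: "containers list" shows
    # created containers whether or not they run; "tasks list" shows only
    # those with a live process.
    sudo ctr --namespace k8s.io containers list
    sudo ctr --namespace k8s.io tasks list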
	I1212 19:58:54.685430   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:54.695280   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:54.695335   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:54.720974   54219 cri.go:89] found id: ""
	I1212 19:58:54.720988   54219 logs.go:282] 0 containers: []
	W1212 19:58:54.720994   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:54.721001   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:54.721063   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:54.744863   54219 cri.go:89] found id: ""
	I1212 19:58:54.744876   54219 logs.go:282] 0 containers: []
	W1212 19:58:54.744883   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:54.744888   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:54.744943   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:54.768441   54219 cri.go:89] found id: ""
	I1212 19:58:54.768454   54219 logs.go:282] 0 containers: []
	W1212 19:58:54.768461   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:54.768465   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:54.768520   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:54.797540   54219 cri.go:89] found id: ""
	I1212 19:58:54.797554   54219 logs.go:282] 0 containers: []
	W1212 19:58:54.797561   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:54.797566   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:54.797633   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:54.825756   54219 cri.go:89] found id: ""
	I1212 19:58:54.825770   54219 logs.go:282] 0 containers: []
	W1212 19:58:54.825776   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:54.825782   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:54.825850   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:54.853837   54219 cri.go:89] found id: ""
	I1212 19:58:54.853850   54219 logs.go:282] 0 containers: []
	W1212 19:58:54.853857   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:54.853867   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:54.853921   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:54.880854   54219 cri.go:89] found id: ""
	I1212 19:58:54.880868   54219 logs.go:282] 0 containers: []
	W1212 19:58:54.880874   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:54.880882   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:54.880892   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:54.908639   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:54.908655   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:54.965093   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:54.965111   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:54.976121   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:54.976137   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:55.044063   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:55.035541   15722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:55.036437   15722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:55.038095   15722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:55.038458   15722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:55.040134   15722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:58:55.035541   15722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:55.036437   15722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:55.038095   15722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:55.038458   15722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:55.040134   15722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:58:55.044074   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:55.044085   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
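With no containers to inspect, the kubelet journal is the most informative of the gathered logs: it records each failed attempt to start the static pods. A focused extract, run inside the node:

    # Pull the most recent kubelet errors; the recurring failure reason for
    # the kube-apiserver static pod usually appears in this tail.
    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'fail|error' | tail -n 20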
	I1212 19:58:57.606891   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:57.617246   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:57.617305   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:57.641248   54219 cri.go:89] found id: ""
	I1212 19:58:57.641261   54219 logs.go:282] 0 containers: []
	W1212 19:58:57.641269   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:57.641274   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:57.641336   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:57.666129   54219 cri.go:89] found id: ""
	I1212 19:58:57.666160   54219 logs.go:282] 0 containers: []
	W1212 19:58:57.666167   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:57.666171   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:57.666226   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:57.690889   54219 cri.go:89] found id: ""
	I1212 19:58:57.690902   54219 logs.go:282] 0 containers: []
	W1212 19:58:57.690913   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:57.690918   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:57.690974   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:57.719998   54219 cri.go:89] found id: ""
	I1212 19:58:57.720012   54219 logs.go:282] 0 containers: []
	W1212 19:58:57.720019   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:57.720024   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:57.720080   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:57.745021   54219 cri.go:89] found id: ""
	I1212 19:58:57.745034   54219 logs.go:282] 0 containers: []
	W1212 19:58:57.745041   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:57.745046   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:57.745102   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:57.769302   54219 cri.go:89] found id: ""
	I1212 19:58:57.769316   54219 logs.go:282] 0 containers: []
	W1212 19:58:57.769322   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:57.769327   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:57.769383   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:57.792874   54219 cri.go:89] found id: ""
	I1212 19:58:57.792887   54219 logs.go:282] 0 containers: []
	W1212 19:58:57.792894   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:57.792902   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:57.792913   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:57.821987   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:57.822003   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:57.878403   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:57.878420   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:57.889240   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:57.889255   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:57.955924   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:57.946885   15825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:57.947418   15825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:57.949013   15825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:57.949699   15825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:57.951375   15825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:58:57.946885   15825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:57.947418   15825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:57.949013   15825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:57.949699   15825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:57.951375   15825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:58:57.955936   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:57.955948   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
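The cadence visible in the log (a pgrep for kube-apiserver roughly every three seconds, with a full log-gathering pass after each miss) is a plain wait-until-deadline loop. A minimal bash sketch of that pattern, with a hypothetical five-minute deadline rather than the driver's actual one:

    # Poll for the newest process whose full command line matches the
    # pattern the log uses; give up once the deadline passes.
    deadline=$((SECONDS + 300))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null 2>&1; do
      if (( SECONDS >= deadline )); then
        echo 'kube-apiserver never appeared' >&2
        exit 1
      fi
      sleep 3
    done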
	I1212 19:59:00.519976   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:00.530412   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:00.530471   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:00.562296   54219 cri.go:89] found id: ""
	I1212 19:59:00.562309   54219 logs.go:282] 0 containers: []
	W1212 19:59:00.562316   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:00.562321   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:00.562381   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:00.590126   54219 cri.go:89] found id: ""
	I1212 19:59:00.590140   54219 logs.go:282] 0 containers: []
	W1212 19:59:00.590147   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:00.590152   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:00.590208   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:00.618262   54219 cri.go:89] found id: ""
	I1212 19:59:00.618276   54219 logs.go:282] 0 containers: []
	W1212 19:59:00.618282   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:00.618287   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:00.618350   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:00.643416   54219 cri.go:89] found id: ""
	I1212 19:59:00.643430   54219 logs.go:282] 0 containers: []
	W1212 19:59:00.643437   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:00.643442   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:00.643497   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:00.668447   54219 cri.go:89] found id: ""
	I1212 19:59:00.668461   54219 logs.go:282] 0 containers: []
	W1212 19:59:00.668469   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:00.668474   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:00.668534   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:00.695735   54219 cri.go:89] found id: ""
	I1212 19:59:00.695748   54219 logs.go:282] 0 containers: []
	W1212 19:59:00.695755   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:00.695760   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:00.695820   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:00.729197   54219 cri.go:89] found id: ""
	I1212 19:59:00.729211   54219 logs.go:282] 0 containers: []
	W1212 19:59:00.729219   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:00.729226   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:00.729237   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:00.739980   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:00.739996   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:00.812904   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:00.804740   15919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:00.805626   15919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:00.806481   15919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:00.807322   15919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:00.809016   15919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:00.804740   15919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:00.805626   15919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:00.806481   15919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:00.807322   15919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:00.809016   15919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:00.812914   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:00.812925   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:00.876760   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:00.876778   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:00.905954   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:00.905970   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
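The describe-nodes fallback always runs kubectl against /var/lib/minikube/kubeconfig, which is why it fails identically to the other probes: that kubeconfig points at the same localhost:8441 endpoint seen in the stderr above. To confirm which server a kubeconfig targets, inside the node:

    # Show the API endpoint the in-node kubeconfig is wired to.
    sudo grep 'server:' /var/lib/minikube/kubeconfig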
	I1212 19:59:03.466026   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:03.476441   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:03.476505   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:03.512755   54219 cri.go:89] found id: ""
	I1212 19:59:03.512774   54219 logs.go:282] 0 containers: []
	W1212 19:59:03.512781   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:03.512786   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:03.512844   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:03.536972   54219 cri.go:89] found id: ""
	I1212 19:59:03.536992   54219 logs.go:282] 0 containers: []
	W1212 19:59:03.536999   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:03.537004   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:03.537071   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:03.564981   54219 cri.go:89] found id: ""
	I1212 19:59:03.564995   54219 logs.go:282] 0 containers: []
	W1212 19:59:03.565002   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:03.565006   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:03.565061   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:03.589258   54219 cri.go:89] found id: ""
	I1212 19:59:03.589271   54219 logs.go:282] 0 containers: []
	W1212 19:59:03.589278   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:03.589283   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:03.589335   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:03.617627   54219 cri.go:89] found id: ""
	I1212 19:59:03.617649   54219 logs.go:282] 0 containers: []
	W1212 19:59:03.617656   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:03.617661   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:03.617724   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:03.643124   54219 cri.go:89] found id: ""
	I1212 19:59:03.643137   54219 logs.go:282] 0 containers: []
	W1212 19:59:03.643144   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:03.643149   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:03.643205   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:03.667587   54219 cri.go:89] found id: ""
	I1212 19:59:03.667601   54219 logs.go:282] 0 containers: []
	W1212 19:59:03.667607   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:03.667615   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:03.667624   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:03.724310   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:03.724326   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:03.735089   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:03.735105   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:03.799034   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:03.791373   16026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:03.792104   16026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:03.793630   16026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:03.793918   16026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:03.795356   16026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:03.791373   16026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:03.792104   16026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:03.793630   16026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:03.793918   16026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:03.795356   16026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:03.799043   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:03.799054   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:03.861867   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:03.861885   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
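The "container status" step uses a small shell fallback: prefer crictl when it is on PATH, otherwise let the bare name fail and fall through to docker. The same idiom, written with command -v in place of the backticked which:

    # Use crictl if installed; otherwise the substitution yields the bare
    # word "crictl", that command fails, and docker is tried instead.
    sudo "$(command -v crictl || echo crictl)" ps -a || sudo docker ps -a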
	I1212 19:59:06.393541   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:06.403453   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:06.403511   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:06.427440   54219 cri.go:89] found id: ""
	I1212 19:59:06.427454   54219 logs.go:282] 0 containers: []
	W1212 19:59:06.427460   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:06.427465   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:06.427524   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:06.457341   54219 cri.go:89] found id: ""
	I1212 19:59:06.457355   54219 logs.go:282] 0 containers: []
	W1212 19:59:06.457361   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:06.457366   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:06.457424   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:06.495095   54219 cri.go:89] found id: ""
	I1212 19:59:06.495110   54219 logs.go:282] 0 containers: []
	W1212 19:59:06.495116   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:06.495122   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:06.495179   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:06.522006   54219 cri.go:89] found id: ""
	I1212 19:59:06.522041   54219 logs.go:282] 0 containers: []
	W1212 19:59:06.522048   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:06.522053   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:06.522111   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:06.551005   54219 cri.go:89] found id: ""
	I1212 19:59:06.551019   54219 logs.go:282] 0 containers: []
	W1212 19:59:06.551026   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:06.551031   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:06.551099   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:06.576063   54219 cri.go:89] found id: ""
	I1212 19:59:06.576089   54219 logs.go:282] 0 containers: []
	W1212 19:59:06.576096   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:06.576101   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:06.576157   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:06.601543   54219 cri.go:89] found id: ""
	I1212 19:59:06.601557   54219 logs.go:282] 0 containers: []
	W1212 19:59:06.601565   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:06.601572   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:06.601582   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:06.657957   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:06.657977   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:06.668650   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:06.668665   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:06.730730   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:06.722725   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:06.723501   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:06.725053   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:06.725374   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:06.726867   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:06.722725   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:06.723501   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:06.725053   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:06.725374   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:06.726867   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:06.730739   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:06.730749   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:06.793201   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:06.793219   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:09.321790   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:09.332762   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:09.332820   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:09.359927   54219 cri.go:89] found id: ""
	I1212 19:59:09.359941   54219 logs.go:282] 0 containers: []
	W1212 19:59:09.359948   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:09.359953   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:09.360026   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:09.385111   54219 cri.go:89] found id: ""
	I1212 19:59:09.385125   54219 logs.go:282] 0 containers: []
	W1212 19:59:09.385137   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:09.385142   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:09.385201   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:09.416991   54219 cri.go:89] found id: ""
	I1212 19:59:09.417006   54219 logs.go:282] 0 containers: []
	W1212 19:59:09.417013   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:09.417018   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:09.417077   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:09.442593   54219 cri.go:89] found id: ""
	I1212 19:59:09.442606   54219 logs.go:282] 0 containers: []
	W1212 19:59:09.442612   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:09.442617   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:09.442672   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:09.469724   54219 cri.go:89] found id: ""
	I1212 19:59:09.469738   54219 logs.go:282] 0 containers: []
	W1212 19:59:09.469745   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:09.469750   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:09.469806   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:09.506134   54219 cri.go:89] found id: ""
	I1212 19:59:09.506148   54219 logs.go:282] 0 containers: []
	W1212 19:59:09.506154   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:09.506160   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:09.506226   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:09.537548   54219 cri.go:89] found id: ""
	I1212 19:59:09.537561   54219 logs.go:282] 0 containers: []
	W1212 19:59:09.537568   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:09.537576   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:09.537585   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:09.596110   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:09.596128   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:09.607356   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:09.607373   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:09.678885   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:09.670805   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:09.671533   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:09.673167   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:09.673470   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:09.674917   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:09.670805   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:09.671533   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:09.673167   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:09.673470   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:09.674917   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:09.678895   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:09.678906   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:09.744120   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:09.744138   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:12.273229   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:12.283400   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:12.283456   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:12.307126   54219 cri.go:89] found id: ""
	I1212 19:59:12.307140   54219 logs.go:282] 0 containers: []
	W1212 19:59:12.307147   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:12.307152   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:12.307208   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:12.333237   54219 cri.go:89] found id: ""
	I1212 19:59:12.333250   54219 logs.go:282] 0 containers: []
	W1212 19:59:12.333257   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:12.333261   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:12.333318   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:12.357336   54219 cri.go:89] found id: ""
	I1212 19:59:12.357349   54219 logs.go:282] 0 containers: []
	W1212 19:59:12.357356   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:12.357361   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:12.357416   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:12.382066   54219 cri.go:89] found id: ""
	I1212 19:59:12.382080   54219 logs.go:282] 0 containers: []
	W1212 19:59:12.382086   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:12.382091   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:12.382147   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:12.406069   54219 cri.go:89] found id: ""
	I1212 19:59:12.406082   54219 logs.go:282] 0 containers: []
	W1212 19:59:12.406089   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:12.406094   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:12.406149   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:12.434345   54219 cri.go:89] found id: ""
	I1212 19:59:12.434365   54219 logs.go:282] 0 containers: []
	W1212 19:59:12.434372   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:12.434377   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:12.434457   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:12.466422   54219 cri.go:89] found id: ""
	I1212 19:59:12.466436   54219 logs.go:282] 0 containers: []
	W1212 19:59:12.466444   54219 logs.go:284] No container was found matching "kindnet"
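
Each retry walks the same list of control-plane components through crictl, and every query returns an empty ID list. The sweep condenses to a loop like this (component names and flags are exactly those in the log):

    # list all containers (any state) whose name matches each component
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        echo "== $c =="
        sudo crictl ps -a --quiet --name="$c"
    done

An empty line under every heading reproduces the `found id: ""` results above: containerd itself is up, but no Kubernetes containers were ever created.
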
	I1212 19:59:12.466451   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:12.466462   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:12.528768   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:12.528787   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:12.541490   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:12.541508   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:12.602589   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:12.594584   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:12.594975   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:12.596484   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:12.596787   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:12.598425   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:12.594584   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:12.594975   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:12.596484   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:12.596787   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:12.598425   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
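
The describe-nodes attempts fail for the same underlying reason as the direct probes: the kubeconfig at /var/lib/minikube/kubeconfig targets localhost:8441 and nothing answers there. To confirm which endpoint that kubeconfig points at (paths and the version-pinned kubectl are taken verbatim from the log):

    # show the server endpoint the bundled kubectl will use
    sudo grep 'server:' /var/lib/minikube/kubeconfig
    # the same call as in the log, issued by hand
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get nodes --kubeconfig=/var/lib/minikube/kubeconfig
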
	I1212 19:59:12.602599   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:12.602609   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:12.664894   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:12.664913   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:15.192235   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:15.202664   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:15.202722   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:15.227464   54219 cri.go:89] found id: ""
	I1212 19:59:15.227477   54219 logs.go:282] 0 containers: []
	W1212 19:59:15.227484   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:15.227489   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:15.227545   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:15.251075   54219 cri.go:89] found id: ""
	I1212 19:59:15.251089   54219 logs.go:282] 0 containers: []
	W1212 19:59:15.251096   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:15.251101   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:15.251156   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:15.275993   54219 cri.go:89] found id: ""
	I1212 19:59:15.276006   54219 logs.go:282] 0 containers: []
	W1212 19:59:15.276013   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:15.276018   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:15.276075   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:15.299883   54219 cri.go:89] found id: ""
	I1212 19:59:15.299896   54219 logs.go:282] 0 containers: []
	W1212 19:59:15.299903   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:15.299908   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:15.299961   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:15.324623   54219 cri.go:89] found id: ""
	I1212 19:59:15.324636   54219 logs.go:282] 0 containers: []
	W1212 19:59:15.324642   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:15.324647   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:15.324702   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:15.350461   54219 cri.go:89] found id: ""
	I1212 19:59:15.350474   54219 logs.go:282] 0 containers: []
	W1212 19:59:15.350481   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:15.350486   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:15.350541   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:15.375380   54219 cri.go:89] found id: ""
	I1212 19:59:15.375407   54219 logs.go:282] 0 containers: []
	W1212 19:59:15.375415   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:15.375423   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:15.375434   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:15.431649   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:15.431669   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:15.444811   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:15.444836   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:15.537885   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:15.529076   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:15.529839   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:15.530552   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:15.532384   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:15.532848   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:15.529076   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:15.529839   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:15.530552   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:15.532384   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:15.532848   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:15.537895   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:15.537908   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:15.604300   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:15.604319   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:18.136615   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:18.146971   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:18.147036   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:18.176330   54219 cri.go:89] found id: ""
	I1212 19:59:18.176344   54219 logs.go:282] 0 containers: []
	W1212 19:59:18.176351   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:18.176359   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:18.176416   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:18.200844   54219 cri.go:89] found id: ""
	I1212 19:59:18.200857   54219 logs.go:282] 0 containers: []
	W1212 19:59:18.200863   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:18.200868   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:18.200924   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:18.224026   54219 cri.go:89] found id: ""
	I1212 19:59:18.224040   54219 logs.go:282] 0 containers: []
	W1212 19:59:18.224046   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:18.224051   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:18.224107   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:18.252073   54219 cri.go:89] found id: ""
	I1212 19:59:18.252086   54219 logs.go:282] 0 containers: []
	W1212 19:59:18.252093   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:18.252098   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:18.252153   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:18.277440   54219 cri.go:89] found id: ""
	I1212 19:59:18.277454   54219 logs.go:282] 0 containers: []
	W1212 19:59:18.277460   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:18.277465   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:18.277521   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:18.302183   54219 cri.go:89] found id: ""
	I1212 19:59:18.302197   54219 logs.go:282] 0 containers: []
	W1212 19:59:18.302214   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:18.302220   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:18.302286   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:18.326037   54219 cri.go:89] found id: ""
	I1212 19:59:18.326058   54219 logs.go:282] 0 containers: []
	W1212 19:59:18.326065   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:18.326073   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:18.326083   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:18.380825   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:18.380843   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:18.391618   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:18.391634   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:18.463287   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:18.454358   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:18.455450   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:18.457129   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:18.457425   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:18.459011   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:18.454358   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:18.455450   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:18.457129   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:18.457425   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:18.459011   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:18.463297   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:18.463309   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:18.536948   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:18.536967   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:21.064758   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:21.074846   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:21.074903   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:21.099031   54219 cri.go:89] found id: ""
	I1212 19:59:21.099044   54219 logs.go:282] 0 containers: []
	W1212 19:59:21.099051   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:21.099056   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:21.099109   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:21.123108   54219 cri.go:89] found id: ""
	I1212 19:59:21.123121   54219 logs.go:282] 0 containers: []
	W1212 19:59:21.123127   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:21.123132   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:21.123187   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:21.146869   54219 cri.go:89] found id: ""
	I1212 19:59:21.146883   54219 logs.go:282] 0 containers: []
	W1212 19:59:21.146890   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:21.146895   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:21.146964   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:21.171309   54219 cri.go:89] found id: ""
	I1212 19:59:21.171323   54219 logs.go:282] 0 containers: []
	W1212 19:59:21.171329   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:21.171340   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:21.171395   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:21.195200   54219 cri.go:89] found id: ""
	I1212 19:59:21.195213   54219 logs.go:282] 0 containers: []
	W1212 19:59:21.195219   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:21.195224   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:21.195282   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:21.218648   54219 cri.go:89] found id: ""
	I1212 19:59:21.218661   54219 logs.go:282] 0 containers: []
	W1212 19:59:21.218668   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:21.218673   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:21.218726   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:21.243375   54219 cri.go:89] found id: ""
	I1212 19:59:21.243388   54219 logs.go:282] 0 containers: []
	W1212 19:59:21.243395   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:21.243402   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:21.243411   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:21.299185   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:21.299202   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:21.309826   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:21.309840   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:21.373437   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:21.365006   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.365633   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.367303   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.367959   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.369725   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:21.365006   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.365633   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.367303   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.367959   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.369725   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:21.373447   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:21.373457   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:21.435817   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:21.435878   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:23.968994   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:23.978907   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:23.978964   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:24.004005   54219 cri.go:89] found id: ""
	I1212 19:59:24.004018   54219 logs.go:282] 0 containers: []
	W1212 19:59:24.004025   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:24.004030   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:24.004085   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:24.031561   54219 cri.go:89] found id: ""
	I1212 19:59:24.031576   54219 logs.go:282] 0 containers: []
	W1212 19:59:24.031583   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:24.031588   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:24.031648   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:24.058089   54219 cri.go:89] found id: ""
	I1212 19:59:24.058105   54219 logs.go:282] 0 containers: []
	W1212 19:59:24.058113   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:24.058120   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:24.058183   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:24.083693   54219 cri.go:89] found id: ""
	I1212 19:59:24.083707   54219 logs.go:282] 0 containers: []
	W1212 19:59:24.083713   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:24.083718   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:24.083774   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:24.110732   54219 cri.go:89] found id: ""
	I1212 19:59:24.110746   54219 logs.go:282] 0 containers: []
	W1212 19:59:24.110753   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:24.110758   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:24.110814   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:24.135252   54219 cri.go:89] found id: ""
	I1212 19:59:24.135266   54219 logs.go:282] 0 containers: []
	W1212 19:59:24.135273   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:24.135278   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:24.135330   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:24.158751   54219 cri.go:89] found id: ""
	I1212 19:59:24.158765   54219 logs.go:282] 0 containers: []
	W1212 19:59:24.158771   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:24.158779   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:24.158788   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:24.188496   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:24.188513   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:24.244683   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:24.244701   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:24.255424   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:24.255440   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:24.324102   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:24.316334   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:24.316892   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:24.318479   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:24.319116   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:24.320192   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:24.316334   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:24.316892   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:24.318479   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:24.319116   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:24.320192   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:24.324113   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:24.324126   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:26.896008   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:26.906451   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:26.906508   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:26.930525   54219 cri.go:89] found id: ""
	I1212 19:59:26.930538   54219 logs.go:282] 0 containers: []
	W1212 19:59:26.930546   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:26.930551   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:26.930607   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:26.954197   54219 cri.go:89] found id: ""
	I1212 19:59:26.954212   54219 logs.go:282] 0 containers: []
	W1212 19:59:26.954219   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:26.954224   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:26.954284   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:26.978362   54219 cri.go:89] found id: ""
	I1212 19:59:26.978375   54219 logs.go:282] 0 containers: []
	W1212 19:59:26.978381   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:26.978388   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:26.978444   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:27.003156   54219 cri.go:89] found id: ""
	I1212 19:59:27.003170   54219 logs.go:282] 0 containers: []
	W1212 19:59:27.003177   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:27.003182   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:27.003241   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:27.035090   54219 cri.go:89] found id: ""
	I1212 19:59:27.035103   54219 logs.go:282] 0 containers: []
	W1212 19:59:27.035110   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:27.035115   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:27.035170   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:27.059270   54219 cri.go:89] found id: ""
	I1212 19:59:27.059284   54219 logs.go:282] 0 containers: []
	W1212 19:59:27.059291   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:27.059296   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:27.059351   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:27.083068   54219 cri.go:89] found id: ""
	I1212 19:59:27.083081   54219 logs.go:282] 0 containers: []
	W1212 19:59:27.083088   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:27.083096   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:27.083105   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:27.138962   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:27.138979   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:27.149646   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:27.149662   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:27.216025   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:27.207685   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:27.208329   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:27.210138   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:27.210711   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:27.212312   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:27.207685   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:27.208329   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:27.210138   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:27.210711   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:27.212312   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:27.216036   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:27.216046   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:27.277808   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:27.277826   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:29.806087   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:29.816453   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:29.816508   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:29.839921   54219 cri.go:89] found id: ""
	I1212 19:59:29.839935   54219 logs.go:282] 0 containers: []
	W1212 19:59:29.839943   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:29.839950   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:29.840023   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:29.868215   54219 cri.go:89] found id: ""
	I1212 19:59:29.868229   54219 logs.go:282] 0 containers: []
	W1212 19:59:29.868236   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:29.868241   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:29.868298   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:29.892199   54219 cri.go:89] found id: ""
	I1212 19:59:29.892212   54219 logs.go:282] 0 containers: []
	W1212 19:59:29.892219   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:29.892226   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:29.892281   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:29.921316   54219 cri.go:89] found id: ""
	I1212 19:59:29.921330   54219 logs.go:282] 0 containers: []
	W1212 19:59:29.921336   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:29.921351   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:29.921415   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:29.946039   54219 cri.go:89] found id: ""
	I1212 19:59:29.946053   54219 logs.go:282] 0 containers: []
	W1212 19:59:29.946059   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:29.946064   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:29.946125   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:29.976514   54219 cri.go:89] found id: ""
	I1212 19:59:29.976528   54219 logs.go:282] 0 containers: []
	W1212 19:59:29.976536   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:29.976541   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:29.976601   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:30.000755   54219 cri.go:89] found id: ""
	I1212 19:59:30.000768   54219 logs.go:282] 0 containers: []
	W1212 19:59:30.000775   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:30.000783   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:30.000793   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:30.058301   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:30.058321   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:30.070295   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:30.070312   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:30.139764   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:30.131062   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:30.131753   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:30.133476   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:30.134278   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:30.135896   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:30.131062   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:30.131753   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:30.133476   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:30.134278   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:30.135896   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:30.139775   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:30.139786   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:30.203348   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:30.203371   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:32.732603   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:32.743210   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:32.743266   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:32.775589   54219 cri.go:89] found id: ""
	I1212 19:59:32.775603   54219 logs.go:282] 0 containers: []
	W1212 19:59:32.775610   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:32.775614   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:32.775673   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:32.799717   54219 cri.go:89] found id: ""
	I1212 19:59:32.799730   54219 logs.go:282] 0 containers: []
	W1212 19:59:32.799737   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:32.799742   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:32.799801   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:32.826819   54219 cri.go:89] found id: ""
	I1212 19:59:32.826832   54219 logs.go:282] 0 containers: []
	W1212 19:59:32.826839   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:32.826844   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:32.826902   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:32.851752   54219 cri.go:89] found id: ""
	I1212 19:59:32.851765   54219 logs.go:282] 0 containers: []
	W1212 19:59:32.851772   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:32.851777   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:32.851832   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:32.876003   54219 cri.go:89] found id: ""
	I1212 19:59:32.876017   54219 logs.go:282] 0 containers: []
	W1212 19:59:32.876024   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:32.876035   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:32.876093   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:32.902460   54219 cri.go:89] found id: ""
	I1212 19:59:32.902474   54219 logs.go:282] 0 containers: []
	W1212 19:59:32.902480   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:32.902504   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:32.902560   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:32.925773   54219 cri.go:89] found id: ""
	I1212 19:59:32.925787   54219 logs.go:282] 0 containers: []
	W1212 19:59:32.925793   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:32.925802   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:32.925812   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:32.936160   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:32.936177   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:33.000494   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:32.992160   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:32.992913   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:32.994556   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:32.994894   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:32.996429   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:32.992160   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:32.992913   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:32.994556   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:32.994894   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:32.996429   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:33.000505   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:33.000515   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:33.066244   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:33.066264   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:33.096113   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:33.096128   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:35.653289   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:35.663651   54219 kubeadm.go:602] duration metric: took 4m3.519380388s to restartPrimaryControlPlane
	W1212 19:59:35.663714   54219 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 19:59:35.663796   54219 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
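
After 4m3.5s of fruitless retries, restartPrimaryControlPlane gives up and minikube falls back to a full reset-and-reinit. kubeadm reset --force skips the confirmation prompt and tears down the node's cluster state under /etc/kubernetes, which is consistent with the config check below finding none of the four .conf files. A quick sanity check after the reset:

    # these should be empty (or absent) once reset has run
    sudo ls -la /etc/kubernetes /etc/kubernetes/manifests
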
	I1212 19:59:36.078838   54219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 19:59:36.092917   54219 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 19:59:36.101391   54219 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 19:59:36.101446   54219 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 19:59:36.109781   54219 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 19:59:36.109792   54219 kubeadm.go:158] found existing configuration files:
	
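
With no .conf files present, the stale-config cleanup that follows degenerates into four grep-then-rm pairs that all hit "No such file or directory". The logic being applied, condensed into one sketch (endpoint and paths verbatim from the log):

    # drop any kubeconfig that does not point at the expected control-plane endpoint
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        sudo grep -q 'https://control-plane.minikube.internal:8441' /etc/kubernetes/$f \
            || sudo rm -f /etc/kubernetes/$f
    done

Because grep exits non-zero both for a missing match and a missing file, absent files are "removed" harmlessly and the init below starts from a clean slate.
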
	I1212 19:59:36.109842   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 19:59:36.118044   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 19:59:36.118100   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 19:59:36.125732   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 19:59:36.133647   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 19:59:36.133711   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 19:59:36.141349   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 19:59:36.149338   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 19:59:36.149401   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 19:59:36.156798   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 19:59:36.164406   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 19:59:36.164460   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 19:59:36.171816   54219 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 19:59:36.215707   54219 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 19:59:36.215925   54219 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 19:59:36.287068   54219 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 19:59:36.287132   54219 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 19:59:36.287172   54219 kubeadm.go:319] OS: Linux
	I1212 19:59:36.287216   54219 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 19:59:36.287263   54219 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 19:59:36.287309   54219 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 19:59:36.287356   54219 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 19:59:36.287415   54219 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 19:59:36.287462   54219 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 19:59:36.287505   54219 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 19:59:36.287552   54219 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 19:59:36.287596   54219 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 19:59:36.350092   54219 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 19:59:36.350201   54219 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 19:59:36.350291   54219 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 19:59:36.357029   54219 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 19:59:36.360551   54219 out.go:252]   - Generating certificates and keys ...
	I1212 19:59:36.360649   54219 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 19:59:36.360718   54219 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 19:59:36.360805   54219 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 19:59:36.360872   54219 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 19:59:36.360946   54219 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 19:59:36.361003   54219 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 19:59:36.361117   54219 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 19:59:36.361314   54219 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 19:59:36.361808   54219 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 19:59:36.362227   54219 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 19:59:36.362588   54219 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 19:59:36.362716   54219 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 19:59:36.513194   54219 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 19:59:36.762182   54219 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 19:59:37.087768   54219 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 19:59:37.827220   54219 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 19:59:38.025150   54219 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 19:59:38.026038   54219 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 19:59:38.030783   54219 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 19:59:38.034177   54219 out.go:252]   - Booting up control plane ...
	I1212 19:59:38.034305   54219 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 19:59:38.035144   54219 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 19:59:38.036428   54219 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 19:59:38.058524   54219 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 19:59:38.058720   54219 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 19:59:38.067348   54219 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 19:59:38.067823   54219 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 19:59:38.067969   54219 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 19:59:38.202645   54219 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 19:59:38.202775   54219 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:03:38.203202   54219 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000900998s
	I1212 20:03:38.203226   54219 kubeadm.go:319] 
	I1212 20:03:38.203283   54219 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:03:38.203315   54219 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:03:38.203419   54219 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:03:38.203424   54219 kubeadm.go:319] 
	I1212 20:03:38.203527   54219 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:03:38.203558   54219 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:03:38.203588   54219 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:03:38.203591   54219 kubeadm.go:319] 
	I1212 20:03:38.208746   54219 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 20:03:38.209173   54219 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:03:38.209280   54219 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:03:38.209544   54219 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 20:03:38.209548   54219 kubeadm.go:319] 
	I1212 20:03:38.209616   54219 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1212 20:03:38.209718   54219 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000900998s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
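	The two SystemVerification warnings above both point at the host's cgroup v1 hierarchy. A standard way to confirm which cgroup version a host mounts (a diagnostic sketch, not output captured from this run):

	    $ stat -fc %T /sys/fs/cgroup
	    tmpfs        # cgroup v1; a unified cgroup v2 mount reports "cgroup2fs"

	minikube then resets and retries the same kubeadm init below, which fails identically.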
	
	I1212 20:03:38.209803   54219 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1212 20:03:38.624272   54219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:03:38.637409   54219 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:03:38.637464   54219 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:03:38.645037   54219 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:03:38.645047   54219 kubeadm.go:158] found existing configuration files:
	
	I1212 20:03:38.645093   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:03:38.652503   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:03:38.652568   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:03:38.659596   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:03:38.667127   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:03:38.667190   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:03:38.674737   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:03:38.682321   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:03:38.682373   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:03:38.689635   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:03:38.696927   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:03:38.696978   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:03:38.704097   54219 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:03:38.743640   54219 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 20:03:38.743913   54219 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:03:38.814950   54219 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:03:38.815010   54219 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 20:03:38.815042   54219 kubeadm.go:319] OS: Linux
	I1212 20:03:38.815098   54219 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:03:38.815149   54219 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:03:38.815192   54219 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:03:38.815236   54219 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:03:38.815280   54219 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:03:38.815324   54219 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:03:38.815365   54219 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:03:38.815409   54219 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:03:38.815451   54219 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:03:38.887100   54219 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:03:38.887197   54219 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:03:38.887281   54219 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:03:38.896370   54219 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:03:38.901736   54219 out.go:252]   - Generating certificates and keys ...
	I1212 20:03:38.901817   54219 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:03:38.901877   54219 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:03:38.901950   54219 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 20:03:38.902007   54219 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 20:03:38.902071   54219 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 20:03:38.902127   54219 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 20:03:38.902186   54219 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 20:03:38.902243   54219 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 20:03:38.902321   54219 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 20:03:38.902389   54219 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 20:03:38.902423   54219 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 20:03:38.902476   54219 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:03:39.125808   54219 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:03:39.338381   54219 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:03:39.401460   54219 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:03:39.625424   54219 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:03:39.783055   54219 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:03:39.783603   54219 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:03:39.786147   54219 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:03:39.789268   54219 out.go:252]   - Booting up control plane ...
	I1212 20:03:39.789370   54219 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:03:39.789458   54219 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:03:39.790103   54219 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:03:39.810111   54219 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:03:39.810207   54219 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:03:39.818331   54219 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:03:39.818818   54219 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:03:39.818950   54219 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:03:39.956538   54219 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:03:39.956645   54219 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:07:39.951298   54219 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001147362s
	I1212 20:07:39.951324   54219 kubeadm.go:319] 
	I1212 20:07:39.951381   54219 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:07:39.951413   54219 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:07:39.951517   54219 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:07:39.951522   54219 kubeadm.go:319] 
	I1212 20:07:39.951625   54219 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:07:39.951656   54219 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:07:39.951686   54219 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:07:39.951689   54219 kubeadm.go:319] 
	I1212 20:07:39.955566   54219 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 20:07:39.956028   54219 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:07:39.956162   54219 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:07:39.956426   54219 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 20:07:39.956433   54219 kubeadm.go:319] 
	I1212 20:07:39.956501   54219 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 20:07:39.956558   54219 kubeadm.go:403] duration metric: took 12m7.846093292s to StartCluster
	I1212 20:07:39.956588   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:07:39.956652   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:07:39.984872   54219 cri.go:89] found id: ""
	I1212 20:07:39.984887   54219 logs.go:282] 0 containers: []
	W1212 20:07:39.984894   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:07:39.984900   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:07:39.984958   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:07:40.008408   54219 cri.go:89] found id: ""
	I1212 20:07:40.008426   54219 logs.go:282] 0 containers: []
	W1212 20:07:40.008433   54219 logs.go:284] No container was found matching "etcd"
	I1212 20:07:40.008439   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:07:40.008502   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:07:40.051885   54219 cri.go:89] found id: ""
	I1212 20:07:40.051899   54219 logs.go:282] 0 containers: []
	W1212 20:07:40.051906   54219 logs.go:284] No container was found matching "coredns"
	I1212 20:07:40.051911   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:07:40.051971   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:07:40.078448   54219 cri.go:89] found id: ""
	I1212 20:07:40.078462   54219 logs.go:282] 0 containers: []
	W1212 20:07:40.078469   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:07:40.078473   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:07:40.078533   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:07:40.105530   54219 cri.go:89] found id: ""
	I1212 20:07:40.105555   54219 logs.go:282] 0 containers: []
	W1212 20:07:40.105562   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:07:40.105568   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:07:40.105632   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:07:40.134868   54219 cri.go:89] found id: ""
	I1212 20:07:40.134884   54219 logs.go:282] 0 containers: []
	W1212 20:07:40.134911   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:07:40.134917   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:07:40.134977   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:07:40.160769   54219 cri.go:89] found id: ""
	I1212 20:07:40.160782   54219 logs.go:282] 0 containers: []
	W1212 20:07:40.160789   54219 logs.go:284] No container was found matching "kindnet"
	I1212 20:07:40.160798   54219 logs.go:123] Gathering logs for container status ...
	I1212 20:07:40.160808   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:07:40.187973   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 20:07:40.187990   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:07:40.250924   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 20:07:40.250942   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:07:40.266149   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:07:40.266165   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:07:40.328697   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:07:40.319521   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:40.320506   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:40.322091   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:40.322633   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:40.324257   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:07:40.319521   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:40.320506   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:40.322091   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:40.322633   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:40.324257   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:07:40.328707   54219 logs.go:123] Gathering logs for containerd ...
	I1212 20:07:40.328717   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1212 20:07:40.395302   54219 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001147362s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 20:07:40.395340   54219 out.go:285] * 
	W1212 20:07:40.395406   54219 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001147362s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 20:07:40.395426   54219 out.go:285] * 
	W1212 20:07:40.397542   54219 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:07:40.403271   54219 out.go:203] 
	W1212 20:07:40.407023   54219 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001147362s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 20:07:40.407080   54219 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 20:07:40.407103   54219 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 20:07:40.410913   54219 out.go:203] 
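	The suggestion above corresponds to a start invocation roughly like the following (a sketch assembled from this run's profile name, driver, runtime, and the suggested flag; not a command the test actually executed):

	    $ out/minikube-linux-arm64 start -p functional-384006 --driver=docker \
	        --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 \
	        --extra-config=kubelet.cgroup-driver=systemd

	Given the kubelet journal further below, the cgroup-driver setting alone may not be sufficient here: the kubelet refuses to validate its configuration on a cgroup v1 host at all.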
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409212463Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409233845Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409270693Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409288186Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409297991Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409313604Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409322646Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409334633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409357730Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409389073Z" level=info msg="Connect containerd service"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409646705Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.410157440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.430913489Z" level=info msg="Start subscribing containerd event"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.431535088Z" level=info msg="Start recovering state"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.431784515Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.431871117Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469097271Z" level=info msg="Start event monitor"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469264239Z" level=info msg="Start cni network conf syncer for default"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469333685Z" level=info msg="Start streaming server"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469389199Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469443014Z" level=info msg="runtime interface starting up..."
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469502196Z" level=info msg="starting plugins..."
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469562690Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 19:55:30 functional-384006 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.471989321Z" level=info msg="containerd successfully booted in 0.083546s"
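	The single error in the containerd log ("no network config found in /etc/cni/net.d") is expected at this stage: minikube installs the CNI configuration only after kubeadm brings the control plane up, which never happened in this run. If needed, the directory can be inspected in the profile container (an illustrative check, not part of this run):

	    $ docker exec functional-384006 ls -la /etc/cni/net.d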
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:12.277282   23236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:12.278065   23236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:12.279996   23236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:12.280678   23236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:12.282261   23236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 19:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014827] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497798] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.037128] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.743560] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.524348] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 20:10:12 up 52 min,  0 user,  load average: 0.41, 0.34, 0.39
	Linux functional-384006 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:10:08 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:10:09 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 519.
	Dec 12 20:10:09 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:10:09 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:10:09 functional-384006 kubelet[23059]: E1212 20:10:09.509950   23059 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:10:09 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:10:09 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:10:10 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 520.
	Dec 12 20:10:10 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:10:10 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:10:10 functional-384006 kubelet[23082]: E1212 20:10:10.249836   23082 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:10:10 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:10:10 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:10:10 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 521.
	Dec 12 20:10:10 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:10:10 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:10:11 functional-384006 kubelet[23124]: E1212 20:10:11.000690   23124 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:10:11 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:10:11 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:10:11 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 522.
	Dec 12 20:10:11 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:10:11 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:10:11 functional-384006 kubelet[23152]: E1212 20:10:11.763052   23152 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:10:11 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:10:11 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-384006 -n functional-384006
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-384006 -n functional-384006: exit status 2 (337.257175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-384006" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.06s)
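Note: the kubelet journal above shows the root cause running through this report: kubelet v1.35.0-beta.0 refuses to start because the host still uses cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so the apiserver never comes up and the status checks fail. A minimal way to confirm which cgroup hierarchy the node sees (a diagnostic sketch, assuming shell access via "minikube ssh -p functional-384006"; not part of the test suite):

	# "cgroup2fs" means cgroup v2; "tmpfs" means the legacy cgroup v1 layout.
	stat -fc %T /sys/fs/cgroup/
	# This controller list only exists on cgroup v2 hosts.
	cat /sys/fs/cgroup/cgroup.controllers

On this Ubuntu 20.04 runner (kernel 5.15.0-1084-aws), which boots the legacy hierarchy by default, the likely remediation is enabling cgroup v2 on the host (e.g. booting with systemd.unified_cgroup_hierarchy=1) rather than changing anything inside the cluster.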

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-384006 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-384006 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (50.521922ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-384006 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-384006 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-384006 describe po hello-node-connect: exit status 1 (58.841947ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1614: "kubectl --context functional-384006 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-384006 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-384006 logs -l app=hello-node-connect: exit status 1 (56.355403ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1620: "kubectl --context functional-384006 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-384006 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-384006 describe svc hello-node-connect: exit status 1 (57.746067ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-384006 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
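Note: every kubectl call in this post-mortem fails at the TCP layer ("connect: connection refused" against 192.168.49.2:8441), consistent with the kubelet crash loop shown earlier rather than anything specific to this test. A quick probe that separates "apiserver down" from "apiserver up but unhealthy" (a sketch, using only the node IP and port from the errors above):

	# A refused connection means nothing is listening on 8441;
	# any HTTP response means the apiserver is up but failing its checks.
	curl -sk https://192.168.49.2:8441/healthz || echo "apiserver not reachable"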
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-384006
helpers_test.go:244: (dbg) docker inspect functional-384006:

-- stdout --
	[
	    {
	        "Id": "b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17",
	        "Created": "2025-12-12T19:40:49.413785329Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43086,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T19:40:49.485581335Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/hosts",
	        "LogPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17-json.log",
	        "Name": "/functional-384006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-384006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-384006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17",
	                "LowerDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296-init/diff:/var/lib/docker/overlay2/e045d4bf347c64f3cbf42a97f0cb5729ed5699bda73ca5751717f555f7c01df1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/merged",
	                "UpperDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/diff",
	                "WorkDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-384006",
	                "Source": "/var/lib/docker/volumes/functional-384006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-384006",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-384006",
	                "name.minikube.sigs.k8s.io": "functional-384006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "36cb954f7d4f6bf90d415ba6b309740af43913afba20f6d7d93ec3c7d90d4de5",
	            "SandboxKey": "/var/run/docker/netns/36cb954f7d4f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-384006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:63:42:b7:50:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef3790c143c0333ab10341d6a40177cef53914dddf926d048a811221f7b4d25e",
	                    "EndpointID": "d9f77e46696253f9c3ce8a0a36703d7a03738ae348c39276dbe99fc3079fb5ee",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-384006",
	                        "b1a98cbc4698"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
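Note: the inspect output confirms the container itself is fine: State.Status is "running" and the apiserver port 8441/tcp is published to 127.0.0.1:32791 on the host, so the failure is inside the guest, not in Docker networking. The same Go template style minikube uses for the SSH port later in this log also extracts that mapping directly (a sketch built only from fields shown above):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-384006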
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-384006 -n functional-384006
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-384006 -n functional-384006: exit status 2 (321.05666ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ functional-384006 cache reload                                                                                                                               │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ ssh     │ functional-384006 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │ 12 Dec 25 19:55 UTC │
	│ kubectl │ functional-384006 kubectl -- --context functional-384006 get pods                                                                                            │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │                     │
	│ start   │ -p functional-384006 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                     │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:55 UTC │                     │
	│ config  │ functional-384006 config unset cpus                                                                                                                          │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:07 UTC │ 12 Dec 25 20:07 UTC │
	│ cp      │ functional-384006 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:07 UTC │ 12 Dec 25 20:07 UTC │
	│ config  │ functional-384006 config get cpus                                                                                                                            │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:07 UTC │                     │
	│ config  │ functional-384006 config set cpus 2                                                                                                                          │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:07 UTC │ 12 Dec 25 20:07 UTC │
	│ config  │ functional-384006 config get cpus                                                                                                                            │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:07 UTC │ 12 Dec 25 20:07 UTC │
	│ config  │ functional-384006 config unset cpus                                                                                                                          │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:07 UTC │ 12 Dec 25 20:07 UTC │
	│ ssh     │ functional-384006 ssh -n functional-384006 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:07 UTC │ 12 Dec 25 20:07 UTC │
	│ config  │ functional-384006 config get cpus                                                                                                                            │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:07 UTC │                     │
	│ ssh     │ functional-384006 ssh echo hello                                                                                                                             │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:07 UTC │ 12 Dec 25 20:07 UTC │
	│ cp      │ functional-384006 cp functional-384006:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp3208660650/001/cp-test.txt │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:07 UTC │ 12 Dec 25 20:07 UTC │
	│ ssh     │ functional-384006 ssh cat /etc/hostname                                                                                                                      │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:07 UTC │ 12 Dec 25 20:07 UTC │
	│ ssh     │ functional-384006 ssh -n functional-384006 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:07 UTC │ 12 Dec 25 20:07 UTC │
	│ tunnel  │ functional-384006 tunnel --alsologtostderr                                                                                                                   │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:07 UTC │                     │
	│ tunnel  │ functional-384006 tunnel --alsologtostderr                                                                                                                   │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:07 UTC │                     │
	│ cp      │ functional-384006 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:07 UTC │ 12 Dec 25 20:07 UTC │
	│ tunnel  │ functional-384006 tunnel --alsologtostderr                                                                                                                   │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:07 UTC │                     │
	│ ssh     │ functional-384006 ssh -n functional-384006 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:07 UTC │ 12 Dec 25 20:07 UTC │
	│ addons  │ functional-384006 addons list                                                                                                                                │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ addons  │ functional-384006 addons list -o json                                                                                                                        │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:55:27
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:55:27.852724   54219 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:55:27.853298   54219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:55:27.853302   54219 out.go:374] Setting ErrFile to fd 2...
	I1212 19:55:27.853307   54219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:55:27.853572   54219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 19:55:27.853965   54219 out.go:368] Setting JSON to false
	I1212 19:55:27.854729   54219 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2277,"bootTime":1765567051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 19:55:27.854784   54219 start.go:143] virtualization:  
	I1212 19:55:27.858422   54219 out.go:179] * [functional-384006] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 19:55:27.861585   54219 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:55:27.861670   54219 notify.go:221] Checking for updates...
	I1212 19:55:27.868224   54219 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:55:27.871239   54219 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:55:27.874218   54219 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	I1212 19:55:27.877241   54219 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 19:55:27.880290   54219 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:55:27.883683   54219 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 19:55:27.883824   54219 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:55:27.904994   54219 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 19:55:27.905107   54219 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:55:27.972320   54219 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 19:55:27.96314904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 19:55:27.972416   54219 docker.go:319] overlay module found
	I1212 19:55:27.975641   54219 out.go:179] * Using the docker driver based on existing profile
	I1212 19:55:27.978549   54219 start.go:309] selected driver: docker
	I1212 19:55:27.978557   54219 start.go:927] validating driver "docker" against &{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:55:27.978631   54219 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:55:27.978726   54219 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:55:28.035973   54219 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 19:55:28.026224666 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 19:55:28.036393   54219 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 19:55:28.036415   54219 cni.go:84] Creating CNI manager for ""
	I1212 19:55:28.036463   54219 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 19:55:28.036537   54219 start.go:353] cluster config:
	{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:55:28.039865   54219 out.go:179] * Starting "functional-384006" primary control-plane node in "functional-384006" cluster
	I1212 19:55:28.042798   54219 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 19:55:28.046082   54219 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 19:55:28.048968   54219 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 19:55:28.049006   54219 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 19:55:28.049015   54219 cache.go:65] Caching tarball of preloaded images
	I1212 19:55:28.049057   54219 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 19:55:28.049116   54219 preload.go:238] Found /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 19:55:28.049125   54219 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 19:55:28.049240   54219 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/config.json ...
	I1212 19:55:28.070140   54219 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 19:55:28.070152   54219 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 19:55:28.070172   54219 cache.go:243] Successfully downloaded all kic artifacts
	I1212 19:55:28.070201   54219 start.go:360] acquireMachinesLock for functional-384006: {Name:mk3334c8fedf7efc32fb4628474f2cba3c1d9181 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:55:28.070267   54219 start.go:364] duration metric: took 47.145µs to acquireMachinesLock for "functional-384006"
	I1212 19:55:28.070285   54219 start.go:96] Skipping create...Using existing machine configuration
	I1212 19:55:28.070289   54219 fix.go:54] fixHost starting: 
	I1212 19:55:28.070558   54219 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
	I1212 19:55:28.087483   54219 fix.go:112] recreateIfNeeded on functional-384006: state=Running err=<nil>
	W1212 19:55:28.087503   54219 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 19:55:28.090814   54219 out.go:252] * Updating the running docker "functional-384006" container ...
	I1212 19:55:28.090839   54219 machine.go:94] provisionDockerMachine start ...
	I1212 19:55:28.090929   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:28.108521   54219 main.go:143] libmachine: Using SSH client type: native
	I1212 19:55:28.108845   54219 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:55:28.108851   54219 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 19:55:28.259057   54219 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-384006
	
	I1212 19:55:28.259071   54219 ubuntu.go:182] provisioning hostname "functional-384006"
	I1212 19:55:28.259129   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:28.275402   54219 main.go:143] libmachine: Using SSH client type: native
	I1212 19:55:28.275704   54219 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:55:28.275713   54219 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-384006 && echo "functional-384006" | sudo tee /etc/hostname
	I1212 19:55:28.436755   54219 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-384006
	
	I1212 19:55:28.436820   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:28.461420   54219 main.go:143] libmachine: Using SSH client type: native
	I1212 19:55:28.461717   54219 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 19:55:28.461739   54219 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-384006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-384006/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-384006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 19:55:28.612044   54219 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 19:55:28.612060   54219 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-2315/.minikube}
	I1212 19:55:28.612075   54219 ubuntu.go:190] setting up certificates
	I1212 19:55:28.612092   54219 provision.go:84] configureAuth start
	I1212 19:55:28.612163   54219 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384006
	I1212 19:55:28.632765   54219 provision.go:143] copyHostCerts
	I1212 19:55:28.632832   54219 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem, removing ...
	I1212 19:55:28.632839   54219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem
	I1212 19:55:28.632906   54219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem (1123 bytes)
	I1212 19:55:28.633087   54219 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem, removing ...
	I1212 19:55:28.633091   54219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem
	I1212 19:55:28.633116   54219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem (1679 bytes)
	I1212 19:55:28.633174   54219 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem, removing ...
	I1212 19:55:28.633178   54219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem
	I1212 19:55:28.633202   54219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem (1078 bytes)
	I1212 19:55:28.633253   54219 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem org=jenkins.functional-384006 san=[127.0.0.1 192.168.49.2 functional-384006 localhost minikube]
	I1212 19:55:28.793482   54219 provision.go:177] copyRemoteCerts
	I1212 19:55:28.793529   54219 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 19:55:28.793567   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:28.810312   54219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:55:28.915572   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 19:55:28.933605   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 19:55:28.951138   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 19:55:28.968522   54219 provision.go:87] duration metric: took 356.418282ms to configureAuth
	I1212 19:55:28.968541   54219 ubuntu.go:206] setting minikube options for container-runtime
	I1212 19:55:28.968740   54219 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 19:55:28.968745   54219 machine.go:97] duration metric: took 877.902402ms to provisionDockerMachine
	I1212 19:55:28.968752   54219 start.go:293] postStartSetup for "functional-384006" (driver="docker")
	I1212 19:55:28.968762   54219 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 19:55:28.968808   54219 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 19:55:28.968851   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:28.987014   54219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:55:29.092173   54219 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 19:55:29.095606   54219 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 19:55:29.095622   54219 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 19:55:29.095634   54219 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-2315/.minikube/addons for local assets ...
	I1212 19:55:29.095686   54219 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-2315/.minikube/files for local assets ...
	I1212 19:55:29.095770   54219 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1212 19:55:29.095858   54219 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> hosts in /etc/test/nested/copy/4120
	I1212 19:55:29.095909   54219 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4120
	I1212 19:55:29.103304   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1212 19:55:29.119777   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts --> /etc/test/nested/copy/4120/hosts (40 bytes)
	I1212 19:55:29.137094   54219 start.go:296] duration metric: took 168.327905ms for postStartSetup
	I1212 19:55:29.137179   54219 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 19:55:29.137221   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:29.155438   54219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:55:29.256753   54219 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 19:55:29.261489   54219 fix.go:56] duration metric: took 1.191194255s for fixHost
	I1212 19:55:29.261504   54219 start.go:83] releasing machines lock for "functional-384006", held for 1.19123098s
	I1212 19:55:29.261570   54219 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384006
	I1212 19:55:29.278501   54219 ssh_runner.go:195] Run: cat /version.json
	I1212 19:55:29.278542   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:29.278786   54219 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 19:55:29.278838   54219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
	I1212 19:55:29.300866   54219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:55:29.303322   54219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
	I1212 19:55:29.403647   54219 ssh_runner.go:195] Run: systemctl --version
	I1212 19:55:29.503423   54219 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 19:55:29.507672   54219 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 19:55:29.507733   54219 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 19:55:29.515681   54219 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 19:55:29.515695   54219 start.go:496] detecting cgroup driver to use...
	I1212 19:55:29.515726   54219 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 19:55:29.515780   54219 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 19:55:29.531132   54219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 19:55:29.543869   54219 docker.go:218] disabling cri-docker service (if available) ...
	I1212 19:55:29.543922   54219 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 19:55:29.559268   54219 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 19:55:29.572058   54219 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 19:55:29.685297   54219 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 19:55:29.805225   54219 docker.go:234] disabling docker service ...
	I1212 19:55:29.805279   54219 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 19:55:29.822098   54219 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 19:55:29.834865   54219 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 19:55:29.949324   54219 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 19:55:30.087483   54219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 19:55:30.100955   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 19:55:30.116237   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 19:55:30.126127   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 19:55:30.136085   54219 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 19:55:30.136147   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 19:55:30.145914   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:55:30.154991   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 19:55:30.163972   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 19:55:30.172470   54219 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 19:55:30.180930   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 19:55:30.190361   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 19:55:30.199337   54219 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 19:55:30.208975   54219 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 19:55:30.216623   54219 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 19:55:30.223993   54219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:55:30.330122   54219 ssh_runner.go:195] Run: sudo systemctl restart containerd
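The block above rewrites /etc/containerd/config.toml through a series of sed substitutions (sandbox image, SystemdCgroup, CNI conf_dir) and then reloads systemd and restarts containerd. A minimal sketch of one such edit done natively in Go instead of sed; the path and the single substitution shown come from the log, everything else is illustrative:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml" // location used in the log

        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }

        // Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        // i.e. force containerd onto the "cgroupfs" driver detected on the host.
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))

        if err := os.WriteFile(path, out, 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // A real flow would then run: systemctl daemon-reload && systemctl restart containerd
    }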
	I1212 19:55:30.473295   54219 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 19:55:30.473369   54219 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 19:55:30.477639   54219 start.go:564] Will wait 60s for crictl version
	I1212 19:55:30.477693   54219 ssh_runner.go:195] Run: which crictl
	I1212 19:55:30.481548   54219 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 19:55:30.504633   54219 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 19:55:30.504687   54219 ssh_runner.go:195] Run: containerd --version
	I1212 19:55:30.523789   54219 ssh_runner.go:195] Run: containerd --version
	I1212 19:55:30.548955   54219 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 19:55:30.551786   54219 cli_runner.go:164] Run: docker network inspect functional-384006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 19:55:30.567944   54219 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 19:55:30.574767   54219 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1212 19:55:30.577669   54219 kubeadm.go:884] updating cluster {Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 19:55:30.577791   54219 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 19:55:30.577868   54219 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 19:55:30.602150   54219 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 19:55:30.602162   54219 containerd.go:534] Images already preloaded, skipping extraction
	I1212 19:55:30.602217   54219 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 19:55:30.625907   54219 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 19:55:30.625919   54219 cache_images.go:86] Images are preloaded, skipping loading
	I1212 19:55:30.625925   54219 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1212 19:55:30.626026   54219 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-384006 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 19:55:30.626113   54219 ssh_runner.go:195] Run: sudo crictl info
	I1212 19:55:30.649188   54219 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1212 19:55:30.649208   54219 cni.go:84] Creating CNI manager for ""
	I1212 19:55:30.649216   54219 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 19:55:30.649224   54219 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 19:55:30.649244   54219 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-384006 NodeName:functional-384006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 19:55:30.649349   54219 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-384006"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 19:55:30.649412   54219 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 19:55:30.656757   54219 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 19:55:30.656810   54219 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 19:55:30.663814   54219 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 19:55:30.675878   54219 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 19:55:30.688262   54219 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
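The kubeadm.yaml.new written above is the rendered form of the options logged at kubeadm.go:190 and printed in full at kubeadm.go:196. A sketch of rendering such a YAML fragment with text/template; the struct fields and template text here are illustrative assumptions, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // opts carries just the values visible in this log; field names are
    // assumptions for the sketch.
    type opts struct {
        AdvertiseAddress string
        BindPort         int
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        // Values taken from the config dump above.
        _ = t.Execute(os.Stdout, opts{AdvertiseAddress: "192.168.49.2", BindPort: 8441})
    }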
	I1212 19:55:30.703971   54219 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 19:55:30.708408   54219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:55:30.839166   54219 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:55:31.445221   54219 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006 for IP: 192.168.49.2
	I1212 19:55:31.445232   54219 certs.go:195] generating shared ca certs ...
	I1212 19:55:31.445248   54219 certs.go:227] acquiring lock for ca certs: {Name:mk39256c1929fe0803d745b94bd58afc348a7e3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:55:31.445419   54219 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key
	I1212 19:55:31.445478   54219 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key
	I1212 19:55:31.445485   54219 certs.go:257] generating profile certs ...
	I1212 19:55:31.445581   54219 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.key
	I1212 19:55:31.445645   54219 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key.6e756d1b
	I1212 19:55:31.445694   54219 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key
	I1212 19:55:31.445823   54219 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem (1338 bytes)
	W1212 19:55:31.445865   54219 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1212 19:55:31.445873   54219 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 19:55:31.445899   54219 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem (1078 bytes)
	I1212 19:55:31.445931   54219 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem (1123 bytes)
	I1212 19:55:31.445954   54219 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem (1679 bytes)
	I1212 19:55:31.446005   54219 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1212 19:55:31.446654   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 19:55:31.468075   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 19:55:31.484808   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 19:55:31.501104   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 19:55:31.519018   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 19:55:31.536328   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 19:55:31.553581   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 19:55:31.570191   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 19:55:31.586954   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1212 19:55:31.603358   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 19:55:31.620509   54219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1212 19:55:31.637987   54219 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 19:55:31.650484   54219 ssh_runner.go:195] Run: openssl version
	I1212 19:55:31.656450   54219 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1212 19:55:31.663636   54219 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1212 19:55:31.671141   54219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1212 19:55:31.674842   54219 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:40 /usr/share/ca-certificates/4120.pem
	I1212 19:55:31.674900   54219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1212 19:55:31.715596   54219 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 19:55:31.723059   54219 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1212 19:55:31.730233   54219 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1212 19:55:31.737626   54219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1212 19:55:31.741161   54219 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:40 /usr/share/ca-certificates/41202.pem
	I1212 19:55:31.741213   54219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1212 19:55:31.783908   54219 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 19:55:31.791542   54219 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:55:31.799333   54219 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 19:55:31.806999   54219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:55:31.810570   54219 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:30 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:55:31.810630   54219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:55:31.851440   54219 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
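The `openssl x509 -hash` / `test -L /etc/ssl/certs/<hash>.0` sequence above verifies that each installed CA is reachable under its OpenSSL subject-hash name (e.g. b5213941.0 for minikubeCA.pem), which is how TLS libraries locate trusted roots. A sketch of producing such a link, assuming only that `openssl` is on PATH; the paths come from the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"

        // Equivalent of: openssl x509 -hash -noout -in <pem>
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in this log

        // ln -fs semantics: replace any existing link, then create <hash>.0 -> pem.
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link)
        if err := os.Symlink(pem, link); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }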
	I1212 19:55:31.858926   54219 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:55:31.862520   54219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 19:55:31.903666   54219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 19:55:31.944997   54219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 19:55:31.985858   54219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 19:55:32.026779   54219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 19:55:32.067925   54219 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
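Each `-checkend 86400` call above asks openssl whether the certificate expires within the next 24 hours (86400 seconds). The same check in pure Go with crypto/x509; the cert path is one of those checked in the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // openssl -checkend 86400: fail if NotAfter falls within the next 24h.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate will expire within 86400s")
            os.Exit(1)
        }
        fmt.Println("certificate is valid for at least another 24h")
    }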
	I1212 19:55:32.110481   54219 kubeadm.go:401] StartCluster: {Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:55:32.110555   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 19:55:32.110624   54219 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:55:32.136703   54219 cri.go:89] found id: ""
	I1212 19:55:32.136771   54219 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 19:55:32.144223   54219 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 19:55:32.144262   54219 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 19:55:32.144312   54219 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 19:55:32.151339   54219 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:55:32.151833   54219 kubeconfig.go:125] found "functional-384006" server: "https://192.168.49.2:8441"
	I1212 19:55:32.153024   54219 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 19:55:32.160890   54219 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-12 19:40:57.602349197 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-12 19:55:30.697011388 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
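Drift is detected above by diffing the deployed /var/tmp/minikube/kubeadm.yaml against the freshly rendered kubeadm.yaml.new; only the admission-plugins value differs, so the control plane is reconfigured rather than fully re-initialized. A minimal sketch of that decision, using a byte comparison in place of `diff -u`:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        old, err1 := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
        fresh, err2 := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err1 != nil || err2 != nil {
            fmt.Fprintln(os.Stderr, "config missing; a full init would be required")
            os.Exit(1)
        }
        if bytes.Equal(old, fresh) {
            fmt.Println("kubeadm config unchanged; restart can reuse it")
            return
        }
        fmt.Println("kubeadm config drift detected; cluster must be reconfigured")
        // The log then promotes the new file: cp kubeadm.yaml.new kubeadm.yaml
    }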
	I1212 19:55:32.160901   54219 kubeadm.go:1161] stopping kube-system containers ...
	I1212 19:55:32.160919   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1212 19:55:32.160971   54219 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:55:32.185826   54219 cri.go:89] found id: ""
	I1212 19:55:32.185884   54219 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 19:55:32.204086   54219 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 19:55:32.212130   54219 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 12 19:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 12 19:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 12 19:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 12 19:45 /etc/kubernetes/scheduler.conf
	
	I1212 19:55:32.212191   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 19:55:32.219934   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 19:55:32.227897   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:55:32.227949   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 19:55:32.235243   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 19:55:32.242858   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:55:32.242920   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 19:55:32.250701   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 19:55:32.258298   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:55:32.258372   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 19:55:32.265710   54219 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 19:55:32.273454   54219 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 19:55:32.324121   54219 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 19:55:33.892385   54219 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.568235814s)
	I1212 19:55:33.892459   54219 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 19:55:34.100445   54219 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 19:55:34.171354   54219 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 19:55:34.217083   54219 api_server.go:52] waiting for apiserver process to appear ...
	I1212 19:55:34.217158   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:34.717278   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:35.217351   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:35.717787   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:36.217788   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:36.717351   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:37.218074   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:37.717373   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:38.218212   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:38.717990   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:39.217746   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:39.717717   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:40.217500   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:40.718081   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:41.217959   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:41.717497   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:42.218218   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:42.717340   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:43.217997   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:43.717351   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:44.217978   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:44.717885   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:45.217387   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:45.718121   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:46.217288   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:46.718053   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:47.217318   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:47.717728   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:48.218067   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:48.717326   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:49.217512   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:49.717353   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:50.217741   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:50.717983   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:51.217333   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:51.717999   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:52.217773   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:52.717402   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:53.217334   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:53.717268   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:54.218070   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:54.717712   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:55.217290   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:55.718107   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:56.217424   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:56.717836   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:57.217448   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:57.718053   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:58.217955   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:58.717942   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:59.218252   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:55:59.717973   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:00.218214   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:00.718129   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:01.217818   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:01.717354   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:02.218222   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:02.717312   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:03.217601   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:03.717316   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:04.217287   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:04.718088   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:05.217741   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:05.717294   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:06.218217   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:06.717867   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:07.217283   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:07.717349   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:08.217366   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:08.717546   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:09.218108   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:09.717381   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:10.217293   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:10.717333   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:11.217921   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:11.717764   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:12.217784   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:12.718179   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:13.218229   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:13.717368   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:14.217920   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:14.717247   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:15.218046   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:15.717383   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:16.218006   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:16.718040   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:17.217291   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:17.717910   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:18.218203   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:18.717788   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:19.217278   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:19.718149   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:20.217534   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:20.717322   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:21.218045   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:21.717355   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:22.218081   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:22.717268   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:23.218208   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:23.717289   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:24.217232   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:24.717930   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:25.218161   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:25.718192   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:26.217327   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:26.717452   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:27.218230   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:27.717354   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:28.217306   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:28.717853   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:29.218101   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:29.717649   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:30.218027   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:30.718035   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:31.217283   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:31.717340   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:32.218050   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:32.717819   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:33.217245   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:33.717370   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
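The run above is a fixed-interval wait loop: roughly every 500ms, `pgrep` checks whether a kube-apiserver process has appeared, until the 60s budget declared at api_server.go:52 is spent. A standard-library sketch of the same loop (minikube's actual helper may differ):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls for the apiserver process until timeout.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // Equivalent of: sudo pgrep -xnf kube-apiserver.*minikube.*
            // pgrep exits 0 only when a matching process exists.
            if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServer(time.Minute); err != nil {
            fmt.Println(err) // the log falls through to gathering diagnostics
        }
    }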
	I1212 19:56:34.217941   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:34.218012   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:34.255372   54219 cri.go:89] found id: ""
	I1212 19:56:34.255386   54219 logs.go:282] 0 containers: []
	W1212 19:56:34.255399   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:34.255404   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:34.255464   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:34.281284   54219 cri.go:89] found id: ""
	I1212 19:56:34.281297   54219 logs.go:282] 0 containers: []
	W1212 19:56:34.281303   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:34.281308   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:34.281363   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:34.304259   54219 cri.go:89] found id: ""
	I1212 19:56:34.304273   54219 logs.go:282] 0 containers: []
	W1212 19:56:34.304279   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:34.304284   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:34.304338   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:34.327600   54219 cri.go:89] found id: ""
	I1212 19:56:34.327613   54219 logs.go:282] 0 containers: []
	W1212 19:56:34.327620   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:34.327625   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:34.327678   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:34.352303   54219 cri.go:89] found id: ""
	I1212 19:56:34.352317   54219 logs.go:282] 0 containers: []
	W1212 19:56:34.352323   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:34.352328   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:34.352385   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:34.375938   54219 cri.go:89] found id: ""
	I1212 19:56:34.375951   54219 logs.go:282] 0 containers: []
	W1212 19:56:34.375958   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:34.375963   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:34.376019   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:34.399635   54219 cri.go:89] found id: ""
	I1212 19:56:34.399648   54219 logs.go:282] 0 containers: []
	W1212 19:56:34.399655   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:34.399663   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:34.399675   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:34.457482   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:34.457501   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:34.467864   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:34.467879   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:34.532394   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:34.523991   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:34.524531   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:34.526241   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:34.526742   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:34.528425   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:56:34.523991   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:34.524531   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:34.526241   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:34.526742   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:34.528425   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:56:34.532405   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:34.532415   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:34.595426   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:34.595444   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:37.126278   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:37.136103   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:37.136162   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:37.160403   54219 cri.go:89] found id: ""
	I1212 19:56:37.160416   54219 logs.go:282] 0 containers: []
	W1212 19:56:37.160422   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:37.160428   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:37.160483   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:37.184487   54219 cri.go:89] found id: ""
	I1212 19:56:37.184500   54219 logs.go:282] 0 containers: []
	W1212 19:56:37.184507   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:37.184512   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:37.184582   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:37.226352   54219 cri.go:89] found id: ""
	I1212 19:56:37.226366   54219 logs.go:282] 0 containers: []
	W1212 19:56:37.226373   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:37.226378   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:37.226435   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:37.258223   54219 cri.go:89] found id: ""
	I1212 19:56:37.258267   54219 logs.go:282] 0 containers: []
	W1212 19:56:37.258274   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:37.258280   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:37.258349   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:37.285540   54219 cri.go:89] found id: ""
	I1212 19:56:37.285554   54219 logs.go:282] 0 containers: []
	W1212 19:56:37.285561   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:37.285566   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:37.285622   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:37.309113   54219 cri.go:89] found id: ""
	I1212 19:56:37.309126   54219 logs.go:282] 0 containers: []
	W1212 19:56:37.309132   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:37.309147   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:37.309226   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:37.332041   54219 cri.go:89] found id: ""
	I1212 19:56:37.332054   54219 logs.go:282] 0 containers: []
	W1212 19:56:37.332061   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:37.332069   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:37.332079   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:37.387421   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:37.387440   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:37.397657   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:37.397672   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:37.461255   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:37.453122   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:37.453687   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:37.455442   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:37.455987   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:37.457488   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:56:37.453122   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:37.453687   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:37.455442   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:37.455987   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:37.457488   10817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:56:37.461265   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:37.461275   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:37.523429   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:37.523446   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:40.054218   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:40.066551   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:40.066620   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:40.099245   54219 cri.go:89] found id: ""
	I1212 19:56:40.099260   54219 logs.go:282] 0 containers: []
	W1212 19:56:40.099267   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:40.099273   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:40.099336   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:40.127637   54219 cri.go:89] found id: ""
	I1212 19:56:40.127653   54219 logs.go:282] 0 containers: []
	W1212 19:56:40.127660   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:40.127666   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:40.127728   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:40.154877   54219 cri.go:89] found id: ""
	I1212 19:56:40.154892   54219 logs.go:282] 0 containers: []
	W1212 19:56:40.154899   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:40.154904   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:40.154966   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:40.186457   54219 cri.go:89] found id: ""
	I1212 19:56:40.186471   54219 logs.go:282] 0 containers: []
	W1212 19:56:40.186478   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:40.186483   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:40.186540   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:40.223505   54219 cri.go:89] found id: ""
	I1212 19:56:40.223520   54219 logs.go:282] 0 containers: []
	W1212 19:56:40.223527   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:40.223532   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:40.223589   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:40.264967   54219 cri.go:89] found id: ""
	I1212 19:56:40.264981   54219 logs.go:282] 0 containers: []
	W1212 19:56:40.264987   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:40.264992   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:40.265064   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:40.288851   54219 cri.go:89] found id: ""
	I1212 19:56:40.288865   54219 logs.go:282] 0 containers: []
	W1212 19:56:40.288871   54219 logs.go:284] No container was found matching "kindnet"
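Each retry cycle first checks for a live apiserver process (pgrep) and then scans for all seven control-plane components by container name; every scan returns an empty ID list, which is why the harness keeps looping. The same scan can be reproduced inside the node (a sketch; the component list is taken from the queries above):

	for c in kube-apiserver etcd coredns kube-scheduler \
	         kube-proxy kube-controller-manager kindnet; do
	  echo "== $c =="
	  sudo crictl ps -a --quiet --name="$c"   # empty output: no such container was ever created
	done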
	I1212 19:56:40.288879   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:40.288889   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:40.345104   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:40.345122   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
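With no control-plane containers to inspect, the kubelet journal gathered above is the most likely place for the root cause, since a static-pod start failure (image pull, certificates, config) is normally logged there. A filtered pull of the same 400 lines (a sketch; <profile> as before):

	minikube -p <profile> ssh -- "sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 40"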
	I1212 19:56:40.355393   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:40.355408   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:40.421074   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:40.412933   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:40.413606   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:40.415194   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:40.415715   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:40.417273   10917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
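The describe-nodes probe uses the version-matched kubectl that minikube stages inside the node (/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl) against the node-local kubeconfig, so it hits the identical connection-refused as the host-side client. What it cannot reveal is whether kubeadm ever wrote the control-plane static-pod manifests; that can be checked directly (a sketch; the manifest path is the kubeadm default, assumed here):

	# an empty or missing directory means the control-plane pods were never generated
	minikube -p <profile> ssh -- sudo ls -l /etc/kubernetes/manifests/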
	I1212 19:56:40.421086   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:40.421100   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:40.484292   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:40.484310   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:43.012558   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:43.022764   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:43.022820   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:43.046602   54219 cri.go:89] found id: ""
	I1212 19:56:43.046617   54219 logs.go:282] 0 containers: []
	W1212 19:56:43.046623   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:43.046628   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:43.046688   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:43.070683   54219 cri.go:89] found id: ""
	I1212 19:56:43.070697   54219 logs.go:282] 0 containers: []
	W1212 19:56:43.070703   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:43.070715   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:43.070769   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:43.094890   54219 cri.go:89] found id: ""
	I1212 19:56:43.094904   54219 logs.go:282] 0 containers: []
	W1212 19:56:43.094911   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:43.094915   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:43.094971   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:43.123965   54219 cri.go:89] found id: ""
	I1212 19:56:43.123978   54219 logs.go:282] 0 containers: []
	W1212 19:56:43.123984   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:43.123989   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:43.124043   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:43.149003   54219 cri.go:89] found id: ""
	I1212 19:56:43.149017   54219 logs.go:282] 0 containers: []
	W1212 19:56:43.149024   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:43.149028   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:43.149084   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:43.177565   54219 cri.go:89] found id: ""
	I1212 19:56:43.177578   54219 logs.go:282] 0 containers: []
	W1212 19:56:43.177584   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:43.177589   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:43.177654   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:43.203765   54219 cri.go:89] found id: ""
	I1212 19:56:43.203779   54219 logs.go:282] 0 containers: []
	W1212 19:56:43.203785   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:43.203793   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:43.203803   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:43.267789   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:43.267807   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:43.278476   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:43.278493   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:43.342414   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:43.333163   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:43.333997   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:43.335535   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:43.336094   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:43.337887   11019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:56:43.342426   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:43.342436   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:43.406378   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:43.406398   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:45.939180   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:45.950923   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:45.950984   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:45.980081   54219 cri.go:89] found id: ""
	I1212 19:56:45.980095   54219 logs.go:282] 0 containers: []
	W1212 19:56:45.980102   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:45.980106   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:45.980162   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:46.008401   54219 cri.go:89] found id: ""
	I1212 19:56:46.008417   54219 logs.go:282] 0 containers: []
	W1212 19:56:46.008425   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:46.008431   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:46.008500   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:46.037350   54219 cri.go:89] found id: ""
	I1212 19:56:46.037364   54219 logs.go:282] 0 containers: []
	W1212 19:56:46.037382   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:46.037388   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:46.037447   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:46.062477   54219 cri.go:89] found id: ""
	I1212 19:56:46.062491   54219 logs.go:282] 0 containers: []
	W1212 19:56:46.062498   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:46.062503   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:46.062562   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:46.088314   54219 cri.go:89] found id: ""
	I1212 19:56:46.088328   54219 logs.go:282] 0 containers: []
	W1212 19:56:46.088335   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:46.088340   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:46.088397   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:46.118483   54219 cri.go:89] found id: ""
	I1212 19:56:46.118496   54219 logs.go:282] 0 containers: []
	W1212 19:56:46.118503   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:46.118513   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:46.118574   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:46.142723   54219 cri.go:89] found id: ""
	I1212 19:56:46.142737   54219 logs.go:282] 0 containers: []
	W1212 19:56:46.142744   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:46.142752   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:46.142773   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:46.213691   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:46.204216   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:46.204961   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:46.206958   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:46.207684   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:46.209470   11112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:56:46.213700   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:46.213710   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:46.286149   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:46.286168   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:46.313728   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:46.313743   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:46.372694   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:46.372711   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:48.883344   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:48.893476   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:48.893532   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:48.917365   54219 cri.go:89] found id: ""
	I1212 19:56:48.917379   54219 logs.go:282] 0 containers: []
	W1212 19:56:48.917386   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:48.917391   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:48.917446   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:48.941342   54219 cri.go:89] found id: ""
	I1212 19:56:48.941356   54219 logs.go:282] 0 containers: []
	W1212 19:56:48.941363   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:48.941367   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:48.941428   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:48.966988   54219 cri.go:89] found id: ""
	I1212 19:56:48.967001   54219 logs.go:282] 0 containers: []
	W1212 19:56:48.967008   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:48.967013   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:48.967070   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:48.990387   54219 cri.go:89] found id: ""
	I1212 19:56:48.990400   54219 logs.go:282] 0 containers: []
	W1212 19:56:48.990407   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:48.990412   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:48.990474   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:49.016237   54219 cri.go:89] found id: ""
	I1212 19:56:49.016251   54219 logs.go:282] 0 containers: []
	W1212 19:56:49.016257   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:49.016263   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:49.016334   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:49.040263   54219 cri.go:89] found id: ""
	I1212 19:56:49.040276   54219 logs.go:282] 0 containers: []
	W1212 19:56:49.040283   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:49.040289   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:49.040346   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:49.064604   54219 cri.go:89] found id: ""
	I1212 19:56:49.064618   54219 logs.go:282] 0 containers: []
	W1212 19:56:49.064625   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:49.064633   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:49.064643   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:49.122132   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:49.122150   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:49.132901   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:49.132916   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:49.203010   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:49.192320   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:49.192966   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:49.194927   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:49.195674   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:49.197449   11222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:56:49.203028   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:49.203038   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:49.277223   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:49.277242   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:51.807432   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:51.817646   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:51.817706   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:51.843424   54219 cri.go:89] found id: ""
	I1212 19:56:51.843438   54219 logs.go:282] 0 containers: []
	W1212 19:56:51.843444   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:51.843449   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:51.843510   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:51.868210   54219 cri.go:89] found id: ""
	I1212 19:56:51.868223   54219 logs.go:282] 0 containers: []
	W1212 19:56:51.868230   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:51.868235   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:51.868290   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:51.892493   54219 cri.go:89] found id: ""
	I1212 19:56:51.892506   54219 logs.go:282] 0 containers: []
	W1212 19:56:51.892513   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:51.892518   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:51.892577   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:51.917111   54219 cri.go:89] found id: ""
	I1212 19:56:51.917124   54219 logs.go:282] 0 containers: []
	W1212 19:56:51.917143   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:51.917148   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:51.917203   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:51.945367   54219 cri.go:89] found id: ""
	I1212 19:56:51.945381   54219 logs.go:282] 0 containers: []
	W1212 19:56:51.945387   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:51.945392   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:51.945449   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:51.970026   54219 cri.go:89] found id: ""
	I1212 19:56:51.970040   54219 logs.go:282] 0 containers: []
	W1212 19:56:51.970047   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:51.970053   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:51.970108   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:51.994534   54219 cri.go:89] found id: ""
	I1212 19:56:51.994547   54219 logs.go:282] 0 containers: []
	W1212 19:56:51.994553   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:51.994563   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:51.994573   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:52.028818   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:52.028848   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:52.090429   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:52.090450   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:52.101879   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:52.101895   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:52.171776   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:52.163507   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:52.164111   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:52.165920   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:52.166464   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:52.168011   11341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:56:52.171787   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:52.171800   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:54.740626   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:54.750925   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:54.750995   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:54.780366   54219 cri.go:89] found id: ""
	I1212 19:56:54.780379   54219 logs.go:282] 0 containers: []
	W1212 19:56:54.780386   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:54.780391   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:54.780449   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:54.804094   54219 cri.go:89] found id: ""
	I1212 19:56:54.804107   54219 logs.go:282] 0 containers: []
	W1212 19:56:54.804113   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:54.804118   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:54.804173   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:54.828262   54219 cri.go:89] found id: ""
	I1212 19:56:54.828276   54219 logs.go:282] 0 containers: []
	W1212 19:56:54.828283   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:54.828288   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:54.828346   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:54.851328   54219 cri.go:89] found id: ""
	I1212 19:56:54.851340   54219 logs.go:282] 0 containers: []
	W1212 19:56:54.851347   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:54.851352   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:54.851406   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:54.874948   54219 cri.go:89] found id: ""
	I1212 19:56:54.874971   54219 logs.go:282] 0 containers: []
	W1212 19:56:54.874978   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:54.874983   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:54.875049   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:54.899059   54219 cri.go:89] found id: ""
	I1212 19:56:54.899072   54219 logs.go:282] 0 containers: []
	W1212 19:56:54.899079   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:54.899085   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:54.899139   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:54.922912   54219 cri.go:89] found id: ""
	I1212 19:56:54.922944   54219 logs.go:282] 0 containers: []
	W1212 19:56:54.922952   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:54.922959   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:54.922969   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:54.982944   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:54.982963   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:54.993620   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:54.993643   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:55.063883   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:55.055908   11437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:55.056618   11437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:55.058221   11437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:55.058538   11437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:55.060159   11437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:56:55.063895   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:55.063905   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:55.126641   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:55.126661   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:56:57.654341   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:56:57.664332   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:56:57.664398   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:56:57.690295   54219 cri.go:89] found id: ""
	I1212 19:56:57.690312   54219 logs.go:282] 0 containers: []
	W1212 19:56:57.690319   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:56:57.690324   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:56:57.690378   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:56:57.715389   54219 cri.go:89] found id: ""
	I1212 19:56:57.715403   54219 logs.go:282] 0 containers: []
	W1212 19:56:57.715409   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:56:57.715414   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:56:57.715485   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:56:57.741214   54219 cri.go:89] found id: ""
	I1212 19:56:57.741228   54219 logs.go:282] 0 containers: []
	W1212 19:56:57.741234   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:56:57.741239   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:56:57.741302   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:56:57.766791   54219 cri.go:89] found id: ""
	I1212 19:56:57.766804   54219 logs.go:282] 0 containers: []
	W1212 19:56:57.766811   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:56:57.766817   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:56:57.766876   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:56:57.791413   54219 cri.go:89] found id: ""
	I1212 19:56:57.791427   54219 logs.go:282] 0 containers: []
	W1212 19:56:57.791434   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:56:57.791439   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:56:57.791494   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:56:57.815197   54219 cri.go:89] found id: ""
	I1212 19:56:57.815211   54219 logs.go:282] 0 containers: []
	W1212 19:56:57.815218   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:56:57.815223   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:56:57.815291   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:56:57.839238   54219 cri.go:89] found id: ""
	I1212 19:56:57.839251   54219 logs.go:282] 0 containers: []
	W1212 19:56:57.839258   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:56:57.839265   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:56:57.839275   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:56:57.895387   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:56:57.895408   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:56:57.906723   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:56:57.906738   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:56:57.970462   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:56:57.962358   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:57.962925   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:57.964418   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:57.964860   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:56:57.966350   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:56:57.970473   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:56:57.970483   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:56:58.035426   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:56:58.035459   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:00.567794   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:00.577750   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:00.577811   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:00.601472   54219 cri.go:89] found id: ""
	I1212 19:57:00.601485   54219 logs.go:282] 0 containers: []
	W1212 19:57:00.601492   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:00.601497   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:00.601552   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:00.624990   54219 cri.go:89] found id: ""
	I1212 19:57:00.625003   54219 logs.go:282] 0 containers: []
	W1212 19:57:00.625009   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:00.625014   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:00.625069   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:00.652831   54219 cri.go:89] found id: ""
	I1212 19:57:00.652845   54219 logs.go:282] 0 containers: []
	W1212 19:57:00.652852   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:00.652857   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:00.652913   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:00.676463   54219 cri.go:89] found id: ""
	I1212 19:57:00.676477   54219 logs.go:282] 0 containers: []
	W1212 19:57:00.676484   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:00.676489   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:00.676544   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:00.700820   54219 cri.go:89] found id: ""
	I1212 19:57:00.700833   54219 logs.go:282] 0 containers: []
	W1212 19:57:00.700840   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:00.700845   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:00.700904   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:00.728048   54219 cri.go:89] found id: ""
	I1212 19:57:00.728061   54219 logs.go:282] 0 containers: []
	W1212 19:57:00.728068   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:00.728073   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:00.728129   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:00.754114   54219 cri.go:89] found id: ""
	I1212 19:57:00.754127   54219 logs.go:282] 0 containers: []
	W1212 19:57:00.754134   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:00.754142   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:00.754152   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:00.783733   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:00.783749   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:00.842004   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:00.842021   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:00.852440   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:00.852455   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:00.914781   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:00.906826   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:00.907342   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:00.908876   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:00.909350   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:00.910854   11653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:57:00.914792   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:00.914802   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:03.477311   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:03.488847   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:03.488902   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:03.517173   54219 cri.go:89] found id: ""
	I1212 19:57:03.517186   54219 logs.go:282] 0 containers: []
	W1212 19:57:03.517194   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:03.517198   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:03.517266   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:03.545723   54219 cri.go:89] found id: ""
	I1212 19:57:03.545737   54219 logs.go:282] 0 containers: []
	W1212 19:57:03.545750   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:03.545755   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:03.545812   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:03.572600   54219 cri.go:89] found id: ""
	I1212 19:57:03.572614   54219 logs.go:282] 0 containers: []
	W1212 19:57:03.572622   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:03.572626   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:03.572688   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:03.597001   54219 cri.go:89] found id: ""
	I1212 19:57:03.597015   54219 logs.go:282] 0 containers: []
	W1212 19:57:03.597026   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:03.597031   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:03.597088   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:03.625021   54219 cri.go:89] found id: ""
	I1212 19:57:03.625034   54219 logs.go:282] 0 containers: []
	W1212 19:57:03.625041   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:03.625046   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:03.625104   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:03.653842   54219 cri.go:89] found id: ""
	I1212 19:57:03.653856   54219 logs.go:282] 0 containers: []
	W1212 19:57:03.653864   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:03.653869   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:03.653926   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:03.677783   54219 cri.go:89] found id: ""
	I1212 19:57:03.677797   54219 logs.go:282] 0 containers: []
	W1212 19:57:03.677804   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:03.677812   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:03.677822   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:03.736594   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:03.736617   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:03.747247   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:03.747264   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:03.809956   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:03.801703   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:03.802457   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:03.804050   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:03.804612   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:03.806253   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:03.801703   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:03.802457   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:03.804050   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:03.804612   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:03.806253   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:03.809965   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:03.809987   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:03.871011   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:03.871029   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
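
Every kubectl attempt above fails with dial tcp [::1]:8441: connect: connection refused, meaning nothing is accepting connections on the apiserver port at all, which matches crictl finding no kube-apiserver container. Two quick checks from the node would confirm this independently (ss is assumed to be present in the node image, and /healthz is the standard apiserver health endpoint; neither comes from this log):

    # Is anything bound to port 8441?
    $ sudo ss -tlnp | grep 8441 || echo "nothing listening on 8441"
    # If the apiserver were up, its health endpoint would answer "ok":
    $ curl -k https://localhost:8441/healthz
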
	I1212 19:57:06.399328   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:06.409365   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:06.409423   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:06.433061   54219 cri.go:89] found id: ""
	I1212 19:57:06.433075   54219 logs.go:282] 0 containers: []
	W1212 19:57:06.433082   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:06.433094   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:06.433154   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:06.481872   54219 cri.go:89] found id: ""
	I1212 19:57:06.481886   54219 logs.go:282] 0 containers: []
	W1212 19:57:06.481893   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:06.481898   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:06.481954   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:06.510179   54219 cri.go:89] found id: ""
	I1212 19:57:06.510192   54219 logs.go:282] 0 containers: []
	W1212 19:57:06.510200   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:06.510204   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:06.510264   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:06.543022   54219 cri.go:89] found id: ""
	I1212 19:57:06.543036   54219 logs.go:282] 0 containers: []
	W1212 19:57:06.543043   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:06.543048   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:06.543104   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:06.570071   54219 cri.go:89] found id: ""
	I1212 19:57:06.570091   54219 logs.go:282] 0 containers: []
	W1212 19:57:06.570100   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:06.570105   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:06.570170   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:06.599741   54219 cri.go:89] found id: ""
	I1212 19:57:06.599754   54219 logs.go:282] 0 containers: []
	W1212 19:57:06.599761   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:06.599779   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:06.599858   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:06.624514   54219 cri.go:89] found id: ""
	I1212 19:57:06.624528   54219 logs.go:282] 0 containers: []
	W1212 19:57:06.624534   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:06.624542   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:06.624553   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:06.635592   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:06.635610   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:06.702713   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:06.694419   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:06.694856   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:06.696741   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:06.697131   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:06.698788   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:06.694419   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:06.694856   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:06.696741   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:06.697131   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:06.698788   11848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:06.702724   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:06.702734   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:06.765240   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:06.765258   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:06.793023   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:06.793039   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
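
For reference, the gathering commands repeated in each cycle are plain systemd and util-linux invocations: journalctl -u <unit> -n 400 tails the last 400 journal entries for a unit, and the dmesg flags (-P no pager, -H human-readable output, -L=never no color) restrict the kernel ring buffer to warning severity and above. They can be run directly on the node:

    # Last 400 kubelet journal entries, exactly as gathered above.
    $ sudo journalctl -u kubelet -n 400
    # Kernel messages at warn/err/crit/alert/emerg, no pager, no color.
    $ sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
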
	I1212 19:57:09.351721   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:09.361738   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:09.361798   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:09.386854   54219 cri.go:89] found id: ""
	I1212 19:57:09.386867   54219 logs.go:282] 0 containers: []
	W1212 19:57:09.386875   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:09.386880   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:09.386944   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:09.412114   54219 cri.go:89] found id: ""
	I1212 19:57:09.412127   54219 logs.go:282] 0 containers: []
	W1212 19:57:09.412134   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:09.412139   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:09.412197   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:09.449831   54219 cri.go:89] found id: ""
	I1212 19:57:09.449844   54219 logs.go:282] 0 containers: []
	W1212 19:57:09.449854   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:09.449859   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:09.449913   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:09.478096   54219 cri.go:89] found id: ""
	I1212 19:57:09.478109   54219 logs.go:282] 0 containers: []
	W1212 19:57:09.478127   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:09.478133   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:09.478205   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:09.509051   54219 cri.go:89] found id: ""
	I1212 19:57:09.509064   54219 logs.go:282] 0 containers: []
	W1212 19:57:09.509072   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:09.509077   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:09.509140   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:09.533239   54219 cri.go:89] found id: ""
	I1212 19:57:09.533253   54219 logs.go:282] 0 containers: []
	W1212 19:57:09.533259   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:09.533265   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:09.533320   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:09.559093   54219 cri.go:89] found id: ""
	I1212 19:57:09.559108   54219 logs.go:282] 0 containers: []
	W1212 19:57:09.559114   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:09.559122   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:09.559144   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:09.569994   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:09.570010   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:09.632936   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:09.623962   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:09.624715   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:09.626476   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:09.627047   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:09.628827   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:09.623962   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:09.624715   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:09.626476   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:09.627047   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:09.628827   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:09.632947   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:09.632957   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:09.694797   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:09.694815   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:09.723095   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:09.723124   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
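
The pgrep probe that opens each cycle, sudo pgrep -xnf kube-apiserver.*minikube.*, matches the pattern against the full command line (-f), requires an exact match (-x), and reports only the newest hit (-n); no output here means no apiserver process exists at all, before the CRI is even consulted. The same probe by hand:

    # Exit status 1 with no output means no matching process.
    $ sudo pgrep -xnf 'kube-apiserver.*minikube.*'; echo "exit=$?"
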
	I1212 19:57:12.279206   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:12.289157   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:12.289218   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:12.314051   54219 cri.go:89] found id: ""
	I1212 19:57:12.314065   54219 logs.go:282] 0 containers: []
	W1212 19:57:12.314071   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:12.314077   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:12.314146   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:12.338981   54219 cri.go:89] found id: ""
	I1212 19:57:12.338995   54219 logs.go:282] 0 containers: []
	W1212 19:57:12.339002   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:12.339007   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:12.339064   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:12.364272   54219 cri.go:89] found id: ""
	I1212 19:57:12.364285   54219 logs.go:282] 0 containers: []
	W1212 19:57:12.364294   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:12.364299   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:12.364356   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:12.388633   54219 cri.go:89] found id: ""
	I1212 19:57:12.388647   54219 logs.go:282] 0 containers: []
	W1212 19:57:12.388654   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:12.388659   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:12.388717   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:12.412315   54219 cri.go:89] found id: ""
	I1212 19:57:12.412330   54219 logs.go:282] 0 containers: []
	W1212 19:57:12.412337   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:12.412342   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:12.412399   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:12.435919   54219 cri.go:89] found id: ""
	I1212 19:57:12.435932   54219 logs.go:282] 0 containers: []
	W1212 19:57:12.435938   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:12.435944   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:12.436010   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:12.464586   54219 cri.go:89] found id: ""
	I1212 19:57:12.464600   54219 logs.go:282] 0 containers: []
	W1212 19:57:12.464607   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:12.464615   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:12.464625   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:12.531126   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:12.531144   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:12.541720   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:12.541737   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:12.607440   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:12.598720   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:12.599599   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:12.601461   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:12.602112   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:12.603704   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:12.598720   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:12.599599   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:12.601461   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:12.602112   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:12.603704   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:12.607450   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:12.607460   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:12.669638   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:12.669657   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
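
Note that the describe nodes step does not depend on the host's kubeconfig: it runs the kubectl binary staged on the node at /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl against the node-local /var/lib/minikube/kubeconfig, and the errors show that kubeconfig also targets localhost:8441, so the request is refused for the same reason as every other probe. Verbatim reproduction on the node:

    # Node-staged kubectl against the in-node kubeconfig.
    $ sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
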
	I1212 19:57:15.197082   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:15.207136   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:15.207197   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:15.232075   54219 cri.go:89] found id: ""
	I1212 19:57:15.232089   54219 logs.go:282] 0 containers: []
	W1212 19:57:15.232095   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:15.232101   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:15.232159   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:15.256640   54219 cri.go:89] found id: ""
	I1212 19:57:15.256654   54219 logs.go:282] 0 containers: []
	W1212 19:57:15.256661   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:15.256668   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:15.256725   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:15.281708   54219 cri.go:89] found id: ""
	I1212 19:57:15.281722   54219 logs.go:282] 0 containers: []
	W1212 19:57:15.281729   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:15.281751   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:15.281811   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:15.306602   54219 cri.go:89] found id: ""
	I1212 19:57:15.306615   54219 logs.go:282] 0 containers: []
	W1212 19:57:15.306622   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:15.306627   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:15.306683   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:15.330704   54219 cri.go:89] found id: ""
	I1212 19:57:15.330718   54219 logs.go:282] 0 containers: []
	W1212 19:57:15.330724   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:15.330730   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:15.330788   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:15.356237   54219 cri.go:89] found id: ""
	I1212 19:57:15.356251   54219 logs.go:282] 0 containers: []
	W1212 19:57:15.356258   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:15.356263   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:15.356322   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:15.384137   54219 cri.go:89] found id: ""
	I1212 19:57:15.384149   54219 logs.go:282] 0 containers: []
	W1212 19:57:15.384155   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:15.384163   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:15.384174   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:15.394815   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:15.394831   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:15.464384   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:15.455162   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.455895   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.457601   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.458207   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.459801   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:15.455162   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.455895   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.457601   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.458207   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:15.459801   12159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:15.464402   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:15.464413   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:15.531093   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:15.531112   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:15.558272   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:15.558287   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
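
The container-status command hides a small shell fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a expands to crictl's full path when it is installed; otherwise the literal word crictl is substituted, that invocation fails, and control falls through to sudo docker ps -a. A restatement with the same behavior:

    # Run crictl if resolvable; if it is missing (or errors), fall back to docker.
    sudo "$(command -v crictl || echo crictl)" ps -a || sudo docker ps -a
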
	I1212 19:57:18.114881   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:18.124888   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:18.124947   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:18.153733   54219 cri.go:89] found id: ""
	I1212 19:57:18.153747   54219 logs.go:282] 0 containers: []
	W1212 19:57:18.153753   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:18.153758   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:18.153819   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:18.179987   54219 cri.go:89] found id: ""
	I1212 19:57:18.180001   54219 logs.go:282] 0 containers: []
	W1212 19:57:18.180007   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:18.180012   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:18.180069   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:18.208210   54219 cri.go:89] found id: ""
	I1212 19:57:18.208223   54219 logs.go:282] 0 containers: []
	W1212 19:57:18.208230   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:18.208235   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:18.208290   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:18.240237   54219 cri.go:89] found id: ""
	I1212 19:57:18.240252   54219 logs.go:282] 0 containers: []
	W1212 19:57:18.240258   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:18.240263   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:18.240321   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:18.263335   54219 cri.go:89] found id: ""
	I1212 19:57:18.263349   54219 logs.go:282] 0 containers: []
	W1212 19:57:18.263356   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:18.263361   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:18.263416   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:18.286920   54219 cri.go:89] found id: ""
	I1212 19:57:18.286933   54219 logs.go:282] 0 containers: []
	W1212 19:57:18.286940   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:18.286945   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:18.286999   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:18.311040   54219 cri.go:89] found id: ""
	I1212 19:57:18.311053   54219 logs.go:282] 0 containers: []
	W1212 19:57:18.311060   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:18.311068   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:18.311077   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:18.366520   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:18.366538   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:18.376885   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:18.376903   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:18.439989   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:18.432083   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.432645   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.434309   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.434875   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.436419   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:18.432083   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.432645   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.434309   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.434875   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:18.436419   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:18.440010   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:18.440020   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:18.511364   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:18.511384   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:21.043380   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:21.053290   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:21.053345   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:21.077334   54219 cri.go:89] found id: ""
	I1212 19:57:21.077348   54219 logs.go:282] 0 containers: []
	W1212 19:57:21.077355   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:21.077360   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:21.077424   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:21.102108   54219 cri.go:89] found id: ""
	I1212 19:57:21.102122   54219 logs.go:282] 0 containers: []
	W1212 19:57:21.102129   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:21.102141   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:21.102198   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:21.125941   54219 cri.go:89] found id: ""
	I1212 19:57:21.125955   54219 logs.go:282] 0 containers: []
	W1212 19:57:21.125962   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:21.125967   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:21.126022   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:21.150198   54219 cri.go:89] found id: ""
	I1212 19:57:21.150211   54219 logs.go:282] 0 containers: []
	W1212 19:57:21.150218   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:21.150229   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:21.150284   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:21.177722   54219 cri.go:89] found id: ""
	I1212 19:57:21.177736   54219 logs.go:282] 0 containers: []
	W1212 19:57:21.177743   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:21.177748   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:21.177806   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:21.205490   54219 cri.go:89] found id: ""
	I1212 19:57:21.205504   54219 logs.go:282] 0 containers: []
	W1212 19:57:21.205511   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:21.205516   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:21.205574   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:21.230104   54219 cri.go:89] found id: ""
	I1212 19:57:21.230118   54219 logs.go:282] 0 containers: []
	W1212 19:57:21.230125   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:21.230132   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:21.230148   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:21.286638   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:21.286655   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:21.297043   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:21.297058   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:21.358837   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:21.350431   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:21.351064   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:21.352763   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:21.353316   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:21.354959   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:21.350431   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:21.351064   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:21.352763   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:21.353316   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:21.354959   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:21.358847   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:21.358858   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:21.425656   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:21.425676   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
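
The timestamps put the loop on a roughly three-second cadence (19:57:18, 19:57:21, 19:57:23, 19:57:26, ...), so minikube keeps re-polling until the apiserver shows up or its overall start timeout expires. Purely as an illustration of that shape, and not minikube's actual code (the 3s interval is read off the log; the 5-minute budget is a made-up placeholder):

    # Illustrative retry loop: poll for a kube-apiserver container every 3s.
    deadline=$(( $(date +%s) + 300 ))   # hypothetical 5-minute budget
    while [ "$(date +%s)" -lt "$deadline" ]; do
        if [ -n "$(sudo crictl ps -a --quiet --name=kube-apiserver)" ]; then
            echo "apiserver container found"; break
        fi
        sleep 3
    done
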
	I1212 19:57:23.965162   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:23.974936   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:23.975001   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:23.998921   54219 cri.go:89] found id: ""
	I1212 19:57:23.998935   54219 logs.go:282] 0 containers: []
	W1212 19:57:23.998942   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:23.998947   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:23.999007   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:24.028254   54219 cri.go:89] found id: ""
	I1212 19:57:24.028283   54219 logs.go:282] 0 containers: []
	W1212 19:57:24.028291   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:24.028296   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:24.028365   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:24.053461   54219 cri.go:89] found id: ""
	I1212 19:57:24.053475   54219 logs.go:282] 0 containers: []
	W1212 19:57:24.053482   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:24.053487   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:24.053546   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:24.082160   54219 cri.go:89] found id: ""
	I1212 19:57:24.082175   54219 logs.go:282] 0 containers: []
	W1212 19:57:24.082182   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:24.082187   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:24.082247   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:24.111368   54219 cri.go:89] found id: ""
	I1212 19:57:24.111381   54219 logs.go:282] 0 containers: []
	W1212 19:57:24.111388   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:24.111394   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:24.111452   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:24.139886   54219 cri.go:89] found id: ""
	I1212 19:57:24.139900   54219 logs.go:282] 0 containers: []
	W1212 19:57:24.139907   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:24.139912   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:24.139966   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:24.165622   54219 cri.go:89] found id: ""
	I1212 19:57:24.165636   54219 logs.go:282] 0 containers: []
	W1212 19:57:24.165644   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:24.165652   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:24.165661   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:24.223024   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:24.223042   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:24.234034   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:24.234049   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:24.300286   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:24.292018   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:24.292708   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:24.294225   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:24.294703   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:24.296238   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:24.292018   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:24.292708   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:24.294225   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:24.294703   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:24.296238   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:24.300298   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:24.300308   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:24.366297   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:24.366324   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:26.892882   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:26.903710   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:26.903767   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:26.928734   54219 cri.go:89] found id: ""
	I1212 19:57:26.928748   54219 logs.go:282] 0 containers: []
	W1212 19:57:26.928754   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:26.928759   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:26.928815   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:26.951741   54219 cri.go:89] found id: ""
	I1212 19:57:26.951754   54219 logs.go:282] 0 containers: []
	W1212 19:57:26.951760   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:26.951765   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:26.951820   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:26.977319   54219 cri.go:89] found id: ""
	I1212 19:57:26.977332   54219 logs.go:282] 0 containers: []
	W1212 19:57:26.977339   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:26.977343   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:26.977396   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:27.005917   54219 cri.go:89] found id: ""
	I1212 19:57:27.005931   54219 logs.go:282] 0 containers: []
	W1212 19:57:27.005937   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:27.005942   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:27.005997   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:27.031546   54219 cri.go:89] found id: ""
	I1212 19:57:27.031561   54219 logs.go:282] 0 containers: []
	W1212 19:57:27.031568   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:27.031573   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:27.031630   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:27.055510   54219 cri.go:89] found id: ""
	I1212 19:57:27.055524   54219 logs.go:282] 0 containers: []
	W1212 19:57:27.055530   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:27.055535   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:27.055593   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:27.083350   54219 cri.go:89] found id: ""
	I1212 19:57:27.083364   54219 logs.go:282] 0 containers: []
	W1212 19:57:27.083370   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:27.083389   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:27.083400   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:27.111521   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:27.111542   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:27.166541   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:27.166558   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:27.177159   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:27.177174   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:27.242522   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:27.234517   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:27.235260   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:27.236963   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:27.237352   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:27.238783   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:27.234517   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:27.235260   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:27.236963   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:27.237352   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:27.238783   12591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:27.242532   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:27.242542   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
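Each polling pass above follows the same shape: pgrep for a kube-apiserver process, then one crictl query per expected control-plane container. A minimal Go sketch of that check, reusing the exact crictl invocation shown in the log (the loop, component list, and output strings here are illustrative, not minikube's own implementation):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The same component names the log polls for, in the same order.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, name := range components {
			// Mirrors the command ssh_runner executes on the node:
			//   sudo crictl ps -a --quiet --name=<component>
			out, err := exec.Command("sudo", "crictl", "ps", "-a",
				"--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s)\n", name, len(ids))
		}
	}

On this node every query returns an empty ID list, which is what produces the repeated `No container was found matching "..."` warnings in each pass.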
	I1212 19:57:29.804626   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:29.814577   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:29.814643   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:29.840378   54219 cri.go:89] found id: ""
	I1212 19:57:29.840391   54219 logs.go:282] 0 containers: []
	W1212 19:57:29.840398   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:29.840403   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:29.840462   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:29.868144   54219 cri.go:89] found id: ""
	I1212 19:57:29.868157   54219 logs.go:282] 0 containers: []
	W1212 19:57:29.868163   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:29.868168   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:29.868227   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:29.893720   54219 cri.go:89] found id: ""
	I1212 19:57:29.893734   54219 logs.go:282] 0 containers: []
	W1212 19:57:29.893740   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:29.893745   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:29.893812   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:29.922305   54219 cri.go:89] found id: ""
	I1212 19:57:29.922319   54219 logs.go:282] 0 containers: []
	W1212 19:57:29.922326   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:29.922331   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:29.922386   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:29.946347   54219 cri.go:89] found id: ""
	I1212 19:57:29.946366   54219 logs.go:282] 0 containers: []
	W1212 19:57:29.946373   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:29.946378   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:29.946434   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:29.971074   54219 cri.go:89] found id: ""
	I1212 19:57:29.971087   54219 logs.go:282] 0 containers: []
	W1212 19:57:29.971094   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:29.971099   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:29.971158   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:29.994674   54219 cri.go:89] found id: ""
	I1212 19:57:29.994697   54219 logs.go:282] 0 containers: []
	W1212 19:57:29.994704   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:29.994712   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:29.994723   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:30.005086   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:30.005108   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:30.083562   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:30.074527   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:30.075529   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:30.077335   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:30.077677   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:30.079272   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:30.074527   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:30.075529   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:30.077335   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:30.077677   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:30.079272   12682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:30.083572   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:30.083582   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:30.146070   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:30.146089   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:30.178521   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:30.178538   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:32.735968   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:32.746704   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:32.746766   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:32.773559   54219 cri.go:89] found id: ""
	I1212 19:57:32.773573   54219 logs.go:282] 0 containers: []
	W1212 19:57:32.773579   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:32.773584   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:32.773647   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:32.796720   54219 cri.go:89] found id: ""
	I1212 19:57:32.796733   54219 logs.go:282] 0 containers: []
	W1212 19:57:32.796749   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:32.796755   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:32.796809   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:32.819740   54219 cri.go:89] found id: ""
	I1212 19:57:32.819754   54219 logs.go:282] 0 containers: []
	W1212 19:57:32.819761   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:32.819766   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:32.819824   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:32.845383   54219 cri.go:89] found id: ""
	I1212 19:57:32.845396   54219 logs.go:282] 0 containers: []
	W1212 19:57:32.845404   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:32.845409   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:32.845463   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:32.868404   54219 cri.go:89] found id: ""
	I1212 19:57:32.868417   54219 logs.go:282] 0 containers: []
	W1212 19:57:32.868423   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:32.868428   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:32.868482   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:32.893264   54219 cri.go:89] found id: ""
	I1212 19:57:32.893278   54219 logs.go:282] 0 containers: []
	W1212 19:57:32.893284   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:32.893289   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:32.893342   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:32.918080   54219 cri.go:89] found id: ""
	I1212 19:57:32.918103   54219 logs.go:282] 0 containers: []
	W1212 19:57:32.918111   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:32.918124   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:32.918134   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:32.983660   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:32.976099   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:32.976670   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:32.978233   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:32.978797   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:32.979854   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:32.976099   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:32.976670   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:32.978233   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:32.978797   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:32.979854   12782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:32.983671   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:32.983682   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:33.050130   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:33.050155   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:33.077660   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:33.077675   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:33.136010   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:33.136028   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
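The kubectl stderr blocks throughout this run all reduce to one symptom: nothing is listening on the apiserver port. A plain TCP dial against the same address the log shows (localhost:8441) reproduces the failure directly; this probe is a standalone sketch, not part of the test suite:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The memcache.go errors above all bottom out in this dial:
		// with no kube-apiserver container running, the port is closed.
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}

While the apiserver is down this prints a "connect: connection refused" error, matching the repeated memcache.go lines above.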
	I1212 19:57:35.647123   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:35.656832   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:35.656887   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:35.680780   54219 cri.go:89] found id: ""
	I1212 19:57:35.680793   54219 logs.go:282] 0 containers: []
	W1212 19:57:35.680800   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:35.680805   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:35.680863   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:35.710149   54219 cri.go:89] found id: ""
	I1212 19:57:35.710163   54219 logs.go:282] 0 containers: []
	W1212 19:57:35.710171   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:35.710175   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:35.710233   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:35.737709   54219 cri.go:89] found id: ""
	I1212 19:57:35.737722   54219 logs.go:282] 0 containers: []
	W1212 19:57:35.737729   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:35.737734   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:35.737788   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:35.763960   54219 cri.go:89] found id: ""
	I1212 19:57:35.763974   54219 logs.go:282] 0 containers: []
	W1212 19:57:35.763986   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:35.763991   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:35.764053   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:35.796697   54219 cri.go:89] found id: ""
	I1212 19:57:35.796710   54219 logs.go:282] 0 containers: []
	W1212 19:57:35.796718   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:35.796722   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:35.796782   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:35.820208   54219 cri.go:89] found id: ""
	I1212 19:57:35.820222   54219 logs.go:282] 0 containers: []
	W1212 19:57:35.820229   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:35.820234   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:35.820289   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:35.845107   54219 cri.go:89] found id: ""
	I1212 19:57:35.845121   54219 logs.go:282] 0 containers: []
	W1212 19:57:35.845128   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:35.845135   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:35.845148   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:35.904798   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:35.904816   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:35.915282   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:35.915297   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:35.980125   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:35.972354   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:35.972745   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:35.974261   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:35.974577   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:35.976219   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:35.972354   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:35.972745   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:35.974261   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:35.974577   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:35.976219   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:35.980135   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:35.980146   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:36.042456   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:36.042476   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:38.571541   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:38.581597   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:38.581658   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:38.604774   54219 cri.go:89] found id: ""
	I1212 19:57:38.604787   54219 logs.go:282] 0 containers: []
	W1212 19:57:38.604794   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:38.604799   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:38.604853   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:38.630065   54219 cri.go:89] found id: ""
	I1212 19:57:38.630079   54219 logs.go:282] 0 containers: []
	W1212 19:57:38.630085   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:38.630090   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:38.630151   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:38.654890   54219 cri.go:89] found id: ""
	I1212 19:57:38.654903   54219 logs.go:282] 0 containers: []
	W1212 19:57:38.654910   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:38.654915   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:38.654970   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:38.682669   54219 cri.go:89] found id: ""
	I1212 19:57:38.682684   54219 logs.go:282] 0 containers: []
	W1212 19:57:38.682691   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:38.682696   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:38.682753   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:38.728209   54219 cri.go:89] found id: ""
	I1212 19:57:38.728227   54219 logs.go:282] 0 containers: []
	W1212 19:57:38.728244   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:38.728249   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:38.728317   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:38.757740   54219 cri.go:89] found id: ""
	I1212 19:57:38.757753   54219 logs.go:282] 0 containers: []
	W1212 19:57:38.757768   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:38.757774   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:38.757829   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:38.785300   54219 cri.go:89] found id: ""
	I1212 19:57:38.785314   54219 logs.go:282] 0 containers: []
	W1212 19:57:38.785321   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:38.785328   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:38.785338   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:38.841797   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:38.841815   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:38.852807   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:38.852823   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:38.918575   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:38.909773   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:38.910996   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:38.911473   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:38.912932   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:38.913369   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:38.909773   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:38.910996   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:38.911473   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:38.912932   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:38.913369   12994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:38.918585   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:38.918596   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:38.980647   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:38.980666   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:41.508125   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:41.518560   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:41.518620   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:41.543483   54219 cri.go:89] found id: ""
	I1212 19:57:41.543497   54219 logs.go:282] 0 containers: []
	W1212 19:57:41.543504   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:41.543509   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:41.543565   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:41.568460   54219 cri.go:89] found id: ""
	I1212 19:57:41.568474   54219 logs.go:282] 0 containers: []
	W1212 19:57:41.568481   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:41.568485   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:41.568541   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:41.592454   54219 cri.go:89] found id: ""
	I1212 19:57:41.592468   54219 logs.go:282] 0 containers: []
	W1212 19:57:41.592475   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:41.592480   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:41.592537   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:41.616514   54219 cri.go:89] found id: ""
	I1212 19:57:41.616528   54219 logs.go:282] 0 containers: []
	W1212 19:57:41.616535   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:41.616540   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:41.616600   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:41.640661   54219 cri.go:89] found id: ""
	I1212 19:57:41.640675   54219 logs.go:282] 0 containers: []
	W1212 19:57:41.640681   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:41.640686   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:41.640741   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:41.668228   54219 cri.go:89] found id: ""
	I1212 19:57:41.668241   54219 logs.go:282] 0 containers: []
	W1212 19:57:41.668248   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:41.668254   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:41.668315   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:41.694010   54219 cri.go:89] found id: ""
	I1212 19:57:41.694023   54219 logs.go:282] 0 containers: []
	W1212 19:57:41.694030   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:41.694048   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:41.694057   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:41.759133   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:41.759153   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:41.770184   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:41.770200   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:41.834777   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:41.826216   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:41.826727   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:41.828548   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:41.828893   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:41.830348   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:41.826216   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:41.826727   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:41.828548   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:41.828893   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:41.830348   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:41.834788   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:41.834798   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:41.896691   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:41.896709   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
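The "Gathering logs for ..." steps run fixed shell pipelines on the node over SSH. A self-contained Go sketch that runs the same three pipelines locally (the command strings are copied from the log; the wrapper itself is hypothetical and only reports sizes rather than minikube's full log capture):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same pipelines the log shows being run via /bin/bash -c.
		cmds := []struct{ name, cmd string }{
			{"kubelet", `sudo journalctl -u kubelet -n 400`},
			{"containerd", `sudo journalctl -u containerd -n 400`},
			{"dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`},
		}
		for _, c := range cmds {
			out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("gathering %s logs failed: %v\n", c.name, err)
				continue
			}
			fmt.Printf("=== %s (%d bytes) ===\n", c.name, len(out))
		}
	}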
	I1212 19:57:44.424748   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:44.434763   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:44.434819   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:44.458808   54219 cri.go:89] found id: ""
	I1212 19:57:44.458821   54219 logs.go:282] 0 containers: []
	W1212 19:57:44.458833   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:44.458839   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:44.458895   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:44.484932   54219 cri.go:89] found id: ""
	I1212 19:57:44.484945   54219 logs.go:282] 0 containers: []
	W1212 19:57:44.484951   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:44.484956   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:44.485013   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:44.509964   54219 cri.go:89] found id: ""
	I1212 19:57:44.509978   54219 logs.go:282] 0 containers: []
	W1212 19:57:44.509985   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:44.509990   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:44.510047   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:44.538212   54219 cri.go:89] found id: ""
	I1212 19:57:44.538226   54219 logs.go:282] 0 containers: []
	W1212 19:57:44.538233   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:44.538239   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:44.538295   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:44.563029   54219 cri.go:89] found id: ""
	I1212 19:57:44.563043   54219 logs.go:282] 0 containers: []
	W1212 19:57:44.563050   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:44.563058   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:44.563116   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:44.594560   54219 cri.go:89] found id: ""
	I1212 19:57:44.594573   54219 logs.go:282] 0 containers: []
	W1212 19:57:44.594580   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:44.594585   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:44.594648   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:44.618882   54219 cri.go:89] found id: ""
	I1212 19:57:44.618896   54219 logs.go:282] 0 containers: []
	W1212 19:57:44.618903   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:44.618910   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:44.618921   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:44.674635   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:44.674653   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:44.685377   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:44.685392   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:44.767577   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:44.758871   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:44.759548   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:44.761205   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:44.761708   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:44.763309   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:44.758871   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:44.759548   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:44.761205   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:44.761708   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:44.763309   13201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:44.767587   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:44.767599   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:44.830883   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:44.830901   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:47.361584   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:47.371608   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:47.371664   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:47.397902   54219 cri.go:89] found id: ""
	I1212 19:57:47.397915   54219 logs.go:282] 0 containers: []
	W1212 19:57:47.397922   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:47.397927   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:47.397983   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:47.421839   54219 cri.go:89] found id: ""
	I1212 19:57:47.421852   54219 logs.go:282] 0 containers: []
	W1212 19:57:47.421859   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:47.421864   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:47.421920   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:47.444814   54219 cri.go:89] found id: ""
	I1212 19:57:47.444829   54219 logs.go:282] 0 containers: []
	W1212 19:57:47.444836   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:47.444841   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:47.444895   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:47.470743   54219 cri.go:89] found id: ""
	I1212 19:57:47.470758   54219 logs.go:282] 0 containers: []
	W1212 19:57:47.470765   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:47.470770   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:47.470829   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:47.494189   54219 cri.go:89] found id: ""
	I1212 19:57:47.494202   54219 logs.go:282] 0 containers: []
	W1212 19:57:47.494209   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:47.494214   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:47.494271   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:47.522490   54219 cri.go:89] found id: ""
	I1212 19:57:47.522504   54219 logs.go:282] 0 containers: []
	W1212 19:57:47.522510   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:47.522515   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:47.522573   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:47.546914   54219 cri.go:89] found id: ""
	I1212 19:57:47.546929   54219 logs.go:282] 0 containers: []
	W1212 19:57:47.546938   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:47.546948   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:47.546960   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:47.602569   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:47.602586   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:47.613063   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:47.613077   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:47.675404   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:47.667395   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:47.668258   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:47.669918   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:47.670233   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:47.671713   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:47.667395   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:47.668258   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:47.669918   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:47.670233   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:47.671713   13306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:47.675413   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:47.675424   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:47.744526   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:47.744545   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:50.275957   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:50.285985   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:50.286042   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:50.310834   54219 cri.go:89] found id: ""
	I1212 19:57:50.310848   54219 logs.go:282] 0 containers: []
	W1212 19:57:50.310855   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:50.310860   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:50.310915   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:50.335949   54219 cri.go:89] found id: ""
	I1212 19:57:50.335962   54219 logs.go:282] 0 containers: []
	W1212 19:57:50.335969   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:50.335973   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:50.336042   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:50.361218   54219 cri.go:89] found id: ""
	I1212 19:57:50.361233   54219 logs.go:282] 0 containers: []
	W1212 19:57:50.361239   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:50.361244   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:50.361302   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:50.389990   54219 cri.go:89] found id: ""
	I1212 19:57:50.390004   54219 logs.go:282] 0 containers: []
	W1212 19:57:50.390011   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:50.390016   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:50.390070   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:50.414872   54219 cri.go:89] found id: ""
	I1212 19:57:50.414886   54219 logs.go:282] 0 containers: []
	W1212 19:57:50.414893   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:50.414898   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:50.414957   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:50.439081   54219 cri.go:89] found id: ""
	I1212 19:57:50.439094   54219 logs.go:282] 0 containers: []
	W1212 19:57:50.439102   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:50.439106   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:50.439162   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:50.463124   54219 cri.go:89] found id: ""
	I1212 19:57:50.463137   54219 logs.go:282] 0 containers: []
	W1212 19:57:50.463144   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:50.463151   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:50.463160   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:50.519197   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:50.519217   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:50.529678   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:50.529697   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:50.593926   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:50.585789   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:50.586582   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:50.588344   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:50.588667   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:50.589987   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:50.585789   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:50.586582   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:50.588344   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:50.588667   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:50.589987   13409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:50.593936   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:50.593946   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:50.663627   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:50.663647   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
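Each cycle above runs the same probe sequence: pgrep for a kube-apiserver process, then "sudo crictl ps -a --quiet --name=<component>" for each control-plane component, then journalctl/dmesg/describe-nodes gathering. A minimal Go sketch of that probe loop follows; it uses os/exec on the local host purely for illustration (an assumption — minikube actually runs these commands inside the node via its ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// probe runs `sudo crictl ps -a --quiet --name=<name>` and reports whether
	// any container IDs came back, mirroring the cri.go/logs.go lines above.
	// crictl exits 0 with empty output when nothing matches, so an empty ID
	// list is the "No container was found" case, not an error.
	func probe(name string) bool {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return false
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%-24s %d containers: %v\n", name, len(ids), ids)
		return len(ids) > 0
	}

	func main() {
		// The exact component list probed in the log, in the same order.
		for _, c := range []string{
			"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet",
		} {
			if !probe(c) {
				fmt.Printf("No container was found matching %q\n", c)
			}
		}
	}

In this run every probe returns an empty ID list, which is why each cycle logs "0 containers" for all seven components before falling back to journalctl and dmesg collection.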
	I1212 19:57:53.195155   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:53.205007   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:53.205065   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:53.228910   54219 cri.go:89] found id: ""
	I1212 19:57:53.228924   54219 logs.go:282] 0 containers: []
	W1212 19:57:53.228930   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:53.228935   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:53.228992   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:53.256269   54219 cri.go:89] found id: ""
	I1212 19:57:53.256282   54219 logs.go:282] 0 containers: []
	W1212 19:57:53.256289   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:53.256294   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:53.256363   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:53.279490   54219 cri.go:89] found id: ""
	I1212 19:57:53.279505   54219 logs.go:282] 0 containers: []
	W1212 19:57:53.279512   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:53.279517   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:53.279575   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:53.303201   54219 cri.go:89] found id: ""
	I1212 19:57:53.303215   54219 logs.go:282] 0 containers: []
	W1212 19:57:53.303222   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:53.303227   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:53.303285   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:53.331320   54219 cri.go:89] found id: ""
	I1212 19:57:53.331333   54219 logs.go:282] 0 containers: []
	W1212 19:57:53.331349   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:53.331354   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:53.331424   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:53.355603   54219 cri.go:89] found id: ""
	I1212 19:57:53.355617   54219 logs.go:282] 0 containers: []
	W1212 19:57:53.355624   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:53.355629   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:53.355685   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:53.380364   54219 cri.go:89] found id: ""
	I1212 19:57:53.380378   54219 logs.go:282] 0 containers: []
	W1212 19:57:53.380385   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:53.380394   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:53.380405   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:53.448989   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:53.440655   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:53.441253   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:53.442753   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:53.443064   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:53.444518   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:53.440655   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:53.441253   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:53.442753   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:53.443064   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:53.444518   13510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:53.449000   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:53.449010   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:53.516879   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:53.516908   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:57:53.550642   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:53.550661   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:53.608676   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:53.608694   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:56.120012   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:56.129790   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:56.129852   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:56.154949   54219 cri.go:89] found id: ""
	I1212 19:57:56.154963   54219 logs.go:282] 0 containers: []
	W1212 19:57:56.154969   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:56.154974   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:56.155029   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:56.178218   54219 cri.go:89] found id: ""
	I1212 19:57:56.178232   54219 logs.go:282] 0 containers: []
	W1212 19:57:56.178240   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:56.178254   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:56.178311   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:56.202037   54219 cri.go:89] found id: ""
	I1212 19:57:56.202053   54219 logs.go:282] 0 containers: []
	W1212 19:57:56.202060   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:56.202065   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:56.202127   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:56.226077   54219 cri.go:89] found id: ""
	I1212 19:57:56.226106   54219 logs.go:282] 0 containers: []
	W1212 19:57:56.226114   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:56.226120   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:56.226183   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:56.249790   54219 cri.go:89] found id: ""
	I1212 19:57:56.249803   54219 logs.go:282] 0 containers: []
	W1212 19:57:56.249810   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:56.249815   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:56.249868   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:56.273767   54219 cri.go:89] found id: ""
	I1212 19:57:56.273780   54219 logs.go:282] 0 containers: []
	W1212 19:57:56.273787   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:56.273793   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:56.273851   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:56.301574   54219 cri.go:89] found id: ""
	I1212 19:57:56.301587   54219 logs.go:282] 0 containers: []
	W1212 19:57:56.301594   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:56.301602   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:56.301612   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:56.362705   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:56.362723   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:56.373142   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:56.373166   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:56.434197   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:56.426404   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:56.426921   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:56.428541   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:56.429015   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:56.430546   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:56.426404   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:56.426921   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:56.428541   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:56.429015   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:56.430546   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:56.434207   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:56.434217   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:56.497280   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:56.497298   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
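Every describe-nodes attempt fails the same way: "dial tcp [::1]:8441: connect: connection refused", meaning nothing is listening on the apiserver port 8441 that the kubeconfig points at. A minimal sketch of reproducing that symptom directly, without involving kubectl (the port is taken from the log; running this inside the node is assumed):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// Dial the apiserver address from the kubeconfig (localhost:8441 here).
	// A refused connection reproduces the memcache.go errors in the log.
	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err) // expect "connection refused"
			return
		}
		conn.Close()
		fmt.Println("something is listening on :8441")
	}

Since the crictl probes find no kube-apiserver container at all, the refused connection is the expected downstream symptom of the missing apiserver rather than an independent networking failure.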
	I1212 19:57:59.029935   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:57:59.040115   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:57:59.040173   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:57:59.064443   54219 cri.go:89] found id: ""
	I1212 19:57:59.064458   54219 logs.go:282] 0 containers: []
	W1212 19:57:59.064465   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:57:59.064470   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:57:59.064525   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:57:59.089160   54219 cri.go:89] found id: ""
	I1212 19:57:59.089173   54219 logs.go:282] 0 containers: []
	W1212 19:57:59.089180   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:57:59.089185   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:57:59.089250   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:57:59.113771   54219 cri.go:89] found id: ""
	I1212 19:57:59.113785   54219 logs.go:282] 0 containers: []
	W1212 19:57:59.113792   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:57:59.113797   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:57:59.113852   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:57:59.141148   54219 cri.go:89] found id: ""
	I1212 19:57:59.141162   54219 logs.go:282] 0 containers: []
	W1212 19:57:59.141169   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:57:59.141174   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:57:59.141241   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:57:59.163991   54219 cri.go:89] found id: ""
	I1212 19:57:59.164005   54219 logs.go:282] 0 containers: []
	W1212 19:57:59.164011   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:57:59.164016   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:57:59.164076   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:57:59.189011   54219 cri.go:89] found id: ""
	I1212 19:57:59.189026   54219 logs.go:282] 0 containers: []
	W1212 19:57:59.189033   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:57:59.189038   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:57:59.189092   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:57:59.213106   54219 cri.go:89] found id: ""
	I1212 19:57:59.213119   54219 logs.go:282] 0 containers: []
	W1212 19:57:59.213125   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:57:59.213133   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:57:59.213143   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:57:59.268036   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:57:59.268054   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:57:59.278468   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:57:59.278483   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:57:59.343881   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:57:59.335767   13724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:59.336563   13724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:59.338140   13724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:59.338447   13724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:59.339954   13724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:57:59.335767   13724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:59.336563   13724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:59.338140   13724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:59.338447   13724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:57:59.339954   13724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:57:59.343891   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:57:59.343909   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:57:59.406439   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:57:59.406457   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:01.935967   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:01.947272   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:01.947331   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:01.980222   54219 cri.go:89] found id: ""
	I1212 19:58:01.980235   54219 logs.go:282] 0 containers: []
	W1212 19:58:01.980251   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:01.980257   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:01.980314   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:02.009777   54219 cri.go:89] found id: ""
	I1212 19:58:02.009794   54219 logs.go:282] 0 containers: []
	W1212 19:58:02.009802   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:02.009808   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:02.009899   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:02.042576   54219 cri.go:89] found id: ""
	I1212 19:58:02.042591   54219 logs.go:282] 0 containers: []
	W1212 19:58:02.042598   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:02.042603   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:02.042680   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:02.067370   54219 cri.go:89] found id: ""
	I1212 19:58:02.067384   54219 logs.go:282] 0 containers: []
	W1212 19:58:02.067392   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:02.067397   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:02.067462   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:02.096410   54219 cri.go:89] found id: ""
	I1212 19:58:02.096423   54219 logs.go:282] 0 containers: []
	W1212 19:58:02.096430   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:02.096436   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:02.096495   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:02.120186   54219 cri.go:89] found id: ""
	I1212 19:58:02.120200   54219 logs.go:282] 0 containers: []
	W1212 19:58:02.120207   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:02.120212   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:02.120272   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:02.146219   54219 cri.go:89] found id: ""
	I1212 19:58:02.146233   54219 logs.go:282] 0 containers: []
	W1212 19:58:02.146240   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:02.146264   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:02.146274   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:02.203137   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:02.203156   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:02.214269   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:02.214290   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:02.282468   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:02.273826   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:02.274485   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:02.276251   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:02.276887   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:02.278544   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:58:02.273826   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:02.274485   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:02.276251   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:02.276887   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:02.278544   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:58:02.282477   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:02.282490   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:02.345078   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:02.345096   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:04.874398   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:04.884418   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:04.884477   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:04.913524   54219 cri.go:89] found id: ""
	I1212 19:58:04.913537   54219 logs.go:282] 0 containers: []
	W1212 19:58:04.913544   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:04.913596   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:04.913656   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:04.941905   54219 cri.go:89] found id: ""
	I1212 19:58:04.941919   54219 logs.go:282] 0 containers: []
	W1212 19:58:04.941925   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:04.941930   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:04.941988   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:04.969529   54219 cri.go:89] found id: ""
	I1212 19:58:04.969549   54219 logs.go:282] 0 containers: []
	W1212 19:58:04.969556   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:04.969561   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:04.969619   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:04.998159   54219 cri.go:89] found id: ""
	I1212 19:58:04.998173   54219 logs.go:282] 0 containers: []
	W1212 19:58:04.998180   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:04.998185   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:04.998241   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:05.027027   54219 cri.go:89] found id: ""
	I1212 19:58:05.027042   54219 logs.go:282] 0 containers: []
	W1212 19:58:05.027052   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:05.027057   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:05.027159   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:05.053821   54219 cri.go:89] found id: ""
	I1212 19:58:05.053834   54219 logs.go:282] 0 containers: []
	W1212 19:58:05.053841   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:05.053847   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:05.053903   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:05.078817   54219 cri.go:89] found id: ""
	I1212 19:58:05.078831   54219 logs.go:282] 0 containers: []
	W1212 19:58:05.078837   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:05.078845   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:05.078856   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:05.137908   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:05.137927   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:05.149843   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:05.149859   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:05.216435   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:05.208482   13932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:05.208883   13932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:05.210371   13932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:05.210673   13932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:05.212119   13932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:58:05.208482   13932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:05.208883   13932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:05.210371   13932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:05.210673   13932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:05.212119   13932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:58:05.216444   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:05.216454   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:05.281451   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:05.281469   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:07.809177   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:07.819079   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:07.819135   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:07.843677   54219 cri.go:89] found id: ""
	I1212 19:58:07.843691   54219 logs.go:282] 0 containers: []
	W1212 19:58:07.843698   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:07.843703   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:07.843763   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:07.873172   54219 cri.go:89] found id: ""
	I1212 19:58:07.873185   54219 logs.go:282] 0 containers: []
	W1212 19:58:07.873192   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:07.873197   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:07.873251   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:07.898060   54219 cri.go:89] found id: ""
	I1212 19:58:07.898082   54219 logs.go:282] 0 containers: []
	W1212 19:58:07.898090   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:07.898099   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:07.898157   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:07.922099   54219 cri.go:89] found id: ""
	I1212 19:58:07.922113   54219 logs.go:282] 0 containers: []
	W1212 19:58:07.922120   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:07.922131   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:07.922186   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:07.951267   54219 cri.go:89] found id: ""
	I1212 19:58:07.951281   54219 logs.go:282] 0 containers: []
	W1212 19:58:07.951287   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:07.951292   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:07.951350   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:07.979301   54219 cri.go:89] found id: ""
	I1212 19:58:07.979315   54219 logs.go:282] 0 containers: []
	W1212 19:58:07.979322   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:07.979327   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:07.979383   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:08.016405   54219 cri.go:89] found id: ""
	I1212 19:58:08.016418   54219 logs.go:282] 0 containers: []
	W1212 19:58:08.016425   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:08.016433   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:08.016444   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:08.027858   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:08.027875   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:08.095861   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:08.086729   14034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:08.087573   14034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:08.088733   14034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:08.089465   14034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:08.091109   14034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:58:08.086729   14034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:08.087573   14034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:08.088733   14034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:08.089465   14034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:08.091109   14034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:58:08.095872   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:08.095885   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:08.159001   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:08.159019   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:08.186794   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:08.186812   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:10.744419   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:10.755144   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:10.755202   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:10.778581   54219 cri.go:89] found id: ""
	I1212 19:58:10.778594   54219 logs.go:282] 0 containers: []
	W1212 19:58:10.778601   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:10.778607   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:10.778663   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:10.802768   54219 cri.go:89] found id: ""
	I1212 19:58:10.802781   54219 logs.go:282] 0 containers: []
	W1212 19:58:10.802787   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:10.802792   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:10.802850   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:10.828295   54219 cri.go:89] found id: ""
	I1212 19:58:10.828309   54219 logs.go:282] 0 containers: []
	W1212 19:58:10.828316   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:10.828321   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:10.828374   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:10.851350   54219 cri.go:89] found id: ""
	I1212 19:58:10.851363   54219 logs.go:282] 0 containers: []
	W1212 19:58:10.851370   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:10.851375   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:10.851429   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:10.879621   54219 cri.go:89] found id: ""
	I1212 19:58:10.879635   54219 logs.go:282] 0 containers: []
	W1212 19:58:10.879641   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:10.879646   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:10.879700   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:10.905108   54219 cri.go:89] found id: ""
	I1212 19:58:10.905122   54219 logs.go:282] 0 containers: []
	W1212 19:58:10.905129   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:10.905134   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:10.905191   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:10.928365   54219 cri.go:89] found id: ""
	I1212 19:58:10.928379   54219 logs.go:282] 0 containers: []
	W1212 19:58:10.928386   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:10.928394   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:10.928418   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:10.986372   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:10.986390   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:10.997450   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:10.997464   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:11.067488   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:11.059465   14141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:11.060118   14141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:11.061655   14141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:11.062199   14141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:11.063664   14141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:58:11.059465   14141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:11.060118   14141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:11.061655   14141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:11.062199   14141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:11.063664   14141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:58:11.067499   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:11.067510   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:11.131069   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:11.131089   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
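The pgrep timestamps (19:57:50.275, 19:57:53.195, 19:57:56.120, 19:57:59.029, ...) show the whole cycle repeating on a roughly three-second cadence — a fixed-interval poll, not exponential backoff. A hedged sketch of such a wait loop follows; the pgrep pattern is copied verbatim from the log, while the interval and deadline are inferred from the timestamps, not taken from minikube's source:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// Poll until a kube-apiserver process appears, the way the repeated
	// `sudo pgrep -xnf kube-apiserver.*minikube.*` lines above do.
	// pgrep exits non-zero when nothing matches, which Run() surfaces as err.
	func main() {
		deadline := time.Now().Add(2 * time.Minute) // illustrative deadline
		for time.Now().Before(deadline) {
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
			time.Sleep(2500 * time.Millisecond) // cadence inferred from the log
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}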
	I1212 19:58:13.660595   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:13.670703   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:13.670762   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:13.694211   54219 cri.go:89] found id: ""
	I1212 19:58:13.694224   54219 logs.go:282] 0 containers: []
	W1212 19:58:13.694231   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:13.694236   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:13.694291   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:13.724541   54219 cri.go:89] found id: ""
	I1212 19:58:13.724554   54219 logs.go:282] 0 containers: []
	W1212 19:58:13.724561   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:13.724566   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:13.724625   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:13.750194   54219 cri.go:89] found id: ""
	I1212 19:58:13.750207   54219 logs.go:282] 0 containers: []
	W1212 19:58:13.750214   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:13.750219   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:13.750277   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:13.774257   54219 cri.go:89] found id: ""
	I1212 19:58:13.774271   54219 logs.go:282] 0 containers: []
	W1212 19:58:13.774278   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:13.774283   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:13.774338   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:13.799078   54219 cri.go:89] found id: ""
	I1212 19:58:13.799091   54219 logs.go:282] 0 containers: []
	W1212 19:58:13.799097   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:13.799102   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:13.799158   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:13.822710   54219 cri.go:89] found id: ""
	I1212 19:58:13.822724   54219 logs.go:282] 0 containers: []
	W1212 19:58:13.822730   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:13.822735   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:13.822791   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:13.849556   54219 cri.go:89] found id: ""
	I1212 19:58:13.849570   54219 logs.go:282] 0 containers: []
	W1212 19:58:13.849576   54219 logs.go:284] No container was found matching "kindnet"
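Each wait iteration performs the same seven-component sweep, one crictl query per control-plane piece, and every query here returns an empty ID list. A compact bash equivalent of one sweep (component names taken from the log; the cri.go lines also pin the containerd runtime root, while this sketch assumes crictl's default endpoint):

    # One pass of the per-component container sweep. --quiet prints only
    # container IDs; an empty result corresponds to the "0 containers" /
    # "No container was found" pairs above.
    for name in kube-apiserver etcd coredns kube-scheduler \
                kube-proxy kube-controller-manager kindnet; do
      ids="$(sudo crictl ps -a --quiet --name="${name}")"
      if [ -n "${ids}" ]; then
        echo "${name}: ${ids}"
      else
        echo "no container matching ${name}"
      fi
    done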
	I1212 19:58:13.849584   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:13.849595   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:13.907383   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:13.907403   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
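The dmesg step keeps only kernel messages at warning severity or worse: with util-linux dmesg, -H selects human-readable output, -P disables the pager, -L=never disables color, and --level restricts the severities; tail then bounds the result to 400 lines. Runnable as-is on the node:

    # Kernel messages at warn and above, uncolored, capped at 400 lines.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400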
	I1212 19:58:13.917866   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:13.917883   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:14.000449   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:13.992686   14245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:13.993186   14245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:13.994632   14245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:13.995157   14245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:13.996620   14245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:58:13.992686   14245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:13.993186   14245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:13.994632   14245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:13.995157   14245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:13.996620   14245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
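Every describe-nodes attempt in this section fails identically: kubectl, run on the node against the cluster kubeconfig, dials localhost:8441, where the kube-apiserver should be listening, and is refused because no apiserver container ever started (the crictl sweeps above find none). Two quick checks that would confirm this from inside the node; ss and curl here are illustrative additions, not part of the test:

    # Is anything bound to the apiserver port?
    sudo ss -ltnp | grep -w 8441 || echo "nothing listening on 8441"
    # Does the health endpoint answer? -k skips TLS verification.
    curl -sk https://localhost:8441/healthz || echo "healthz unreachable"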
	I1212 19:58:14.000458   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:14.000477   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:14.066367   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:14.066386   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:16.594682   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:16.604845   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:16.604903   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:16.629471   54219 cri.go:89] found id: ""
	I1212 19:58:16.629485   54219 logs.go:282] 0 containers: []
	W1212 19:58:16.629493   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:16.629498   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:16.629554   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:16.654890   54219 cri.go:89] found id: ""
	I1212 19:58:16.654904   54219 logs.go:282] 0 containers: []
	W1212 19:58:16.654911   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:16.654916   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:16.654981   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:16.679283   54219 cri.go:89] found id: ""
	I1212 19:58:16.679297   54219 logs.go:282] 0 containers: []
	W1212 19:58:16.679304   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:16.679309   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:16.679362   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:16.704043   54219 cri.go:89] found id: ""
	I1212 19:58:16.704057   54219 logs.go:282] 0 containers: []
	W1212 19:58:16.704065   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:16.704070   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:16.704127   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:16.728139   54219 cri.go:89] found id: ""
	I1212 19:58:16.728153   54219 logs.go:282] 0 containers: []
	W1212 19:58:16.728159   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:16.728164   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:16.728225   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:16.757814   54219 cri.go:89] found id: ""
	I1212 19:58:16.757829   54219 logs.go:282] 0 containers: []
	W1212 19:58:16.757836   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:16.757841   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:16.757894   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:16.782420   54219 cri.go:89] found id: ""
	I1212 19:58:16.782433   54219 logs.go:282] 0 containers: []
	W1212 19:58:16.782441   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:16.782448   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:16.782458   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:16.841763   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:16.841780   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:16.852845   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:16.852861   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:16.920551   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:16.912049   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:16.912668   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:16.914340   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:16.914862   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:16.916428   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:58:16.912049   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:16.912668   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:16.914340   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:16.914862   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:16.916428   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:58:16.920561   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:16.920572   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:16.986769   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:16.986788   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:19.527987   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:19.537931   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:19.537994   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:19.561363   54219 cri.go:89] found id: ""
	I1212 19:58:19.561377   54219 logs.go:282] 0 containers: []
	W1212 19:58:19.561383   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:19.561389   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:19.561444   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:19.584696   54219 cri.go:89] found id: ""
	I1212 19:58:19.584710   54219 logs.go:282] 0 containers: []
	W1212 19:58:19.584717   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:19.584722   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:19.584783   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:19.608796   54219 cri.go:89] found id: ""
	I1212 19:58:19.608816   54219 logs.go:282] 0 containers: []
	W1212 19:58:19.608829   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:19.608834   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:19.608888   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:19.633676   54219 cri.go:89] found id: ""
	I1212 19:58:19.633690   54219 logs.go:282] 0 containers: []
	W1212 19:58:19.633697   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:19.633702   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:19.633765   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:19.656537   54219 cri.go:89] found id: ""
	I1212 19:58:19.656550   54219 logs.go:282] 0 containers: []
	W1212 19:58:19.656557   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:19.656562   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:19.656615   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:19.681676   54219 cri.go:89] found id: ""
	I1212 19:58:19.681689   54219 logs.go:282] 0 containers: []
	W1212 19:58:19.681696   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:19.681701   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:19.681756   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:19.704747   54219 cri.go:89] found id: ""
	I1212 19:58:19.704761   54219 logs.go:282] 0 containers: []
	W1212 19:58:19.704768   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:19.704775   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:19.704785   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:19.760344   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:19.760360   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:19.770729   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:19.770745   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:19.834442   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:19.826076   14456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:19.826835   14456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:19.828378   14456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:19.828837   14456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:19.830354   14456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:58:19.826076   14456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:19.826835   14456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:19.828378   14456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:19.828837   14456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:19.830354   14456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:58:19.834452   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:19.834462   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:19.897417   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:19.897437   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:22.424308   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:22.434481   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:22.434537   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:22.458765   54219 cri.go:89] found id: ""
	I1212 19:58:22.458778   54219 logs.go:282] 0 containers: []
	W1212 19:58:22.458785   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:22.458790   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:22.458844   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:22.486364   54219 cri.go:89] found id: ""
	I1212 19:58:22.486378   54219 logs.go:282] 0 containers: []
	W1212 19:58:22.486385   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:22.486403   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:22.486469   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:22.518554   54219 cri.go:89] found id: ""
	I1212 19:58:22.518567   54219 logs.go:282] 0 containers: []
	W1212 19:58:22.518575   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:22.518579   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:22.518648   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:22.543164   54219 cri.go:89] found id: ""
	I1212 19:58:22.543178   54219 logs.go:282] 0 containers: []
	W1212 19:58:22.543185   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:22.543190   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:22.543266   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:22.567677   54219 cri.go:89] found id: ""
	I1212 19:58:22.567691   54219 logs.go:282] 0 containers: []
	W1212 19:58:22.567697   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:22.567702   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:22.567757   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:22.594216   54219 cri.go:89] found id: ""
	I1212 19:58:22.594230   54219 logs.go:282] 0 containers: []
	W1212 19:58:22.594237   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:22.594242   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:22.594310   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:22.622007   54219 cri.go:89] found id: ""
	I1212 19:58:22.622021   54219 logs.go:282] 0 containers: []
	W1212 19:58:22.622028   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:22.622036   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:22.622046   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:22.684696   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:22.684719   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:22.696409   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:22.696425   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:22.763719   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:22.755358   14560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:22.756087   14560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:22.757853   14560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:22.758404   14560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:22.759874   14560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:58:22.755358   14560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:22.756087   14560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:22.757853   14560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:22.758404   14560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:22.759874   14560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:58:22.763730   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:22.763742   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:22.828220   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:22.828242   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:25.355355   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:25.367957   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:25.368041   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:25.394847   54219 cri.go:89] found id: ""
	I1212 19:58:25.394861   54219 logs.go:282] 0 containers: []
	W1212 19:58:25.394868   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:25.394873   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:25.394928   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:25.419394   54219 cri.go:89] found id: ""
	I1212 19:58:25.419408   54219 logs.go:282] 0 containers: []
	W1212 19:58:25.419414   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:25.419419   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:25.419477   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:25.444373   54219 cri.go:89] found id: ""
	I1212 19:58:25.444386   54219 logs.go:282] 0 containers: []
	W1212 19:58:25.444393   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:25.444398   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:25.444455   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:25.467872   54219 cri.go:89] found id: ""
	I1212 19:58:25.467886   54219 logs.go:282] 0 containers: []
	W1212 19:58:25.467892   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:25.467897   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:25.467952   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:25.491493   54219 cri.go:89] found id: ""
	I1212 19:58:25.491507   54219 logs.go:282] 0 containers: []
	W1212 19:58:25.491514   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:25.491519   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:25.491575   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:25.515809   54219 cri.go:89] found id: ""
	I1212 19:58:25.515832   54219 logs.go:282] 0 containers: []
	W1212 19:58:25.515864   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:25.515869   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:25.515939   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:25.540733   54219 cri.go:89] found id: ""
	I1212 19:58:25.540747   54219 logs.go:282] 0 containers: []
	W1212 19:58:25.540754   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:25.540762   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:25.540773   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:25.551372   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:25.551387   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:25.613099   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:25.604731   14664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:25.605382   14664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:25.607062   14664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:25.607684   14664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:25.609352   14664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:58:25.604731   14664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:25.605382   14664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:25.607062   14664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:25.607684   14664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:25.609352   14664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:58:25.613109   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:25.613119   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:25.674835   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:25.674854   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:25.702894   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:25.702910   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
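The outer wait loop re-probes on a roughly three-second cadence (19:58:13, :16, :19, and so on through :37 below), opening each pass with the same pgrep test: -f matches against the full command line, -x requires the whole line to match, and -n picks the newest match. A hypothetical bounded version of that wait, with a made-up 60-second deadline:

    # Sketch: wait for an apiserver process, but give up after ~60s.
    deadline=$(( $(date +%s) + 60 ))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$(date +%s)" -ge "${deadline}" ]; then
        echo "kube-apiserver never appeared" >&2
        exit 1
      fi
      sleep 3
    done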
	I1212 19:58:28.260731   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:28.270423   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:28.270480   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:28.297804   54219 cri.go:89] found id: ""
	I1212 19:58:28.297818   54219 logs.go:282] 0 containers: []
	W1212 19:58:28.297825   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:28.297830   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:28.297887   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:28.322143   54219 cri.go:89] found id: ""
	I1212 19:58:28.322157   54219 logs.go:282] 0 containers: []
	W1212 19:58:28.322164   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:28.322169   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:28.322223   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:28.346215   54219 cri.go:89] found id: ""
	I1212 19:58:28.346229   54219 logs.go:282] 0 containers: []
	W1212 19:58:28.346236   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:28.346241   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:28.346297   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:28.370542   54219 cri.go:89] found id: ""
	I1212 19:58:28.370556   54219 logs.go:282] 0 containers: []
	W1212 19:58:28.370563   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:28.370574   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:28.370634   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:28.397655   54219 cri.go:89] found id: ""
	I1212 19:58:28.397670   54219 logs.go:282] 0 containers: []
	W1212 19:58:28.397677   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:28.397682   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:28.397737   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:28.421548   54219 cri.go:89] found id: ""
	I1212 19:58:28.421561   54219 logs.go:282] 0 containers: []
	W1212 19:58:28.421568   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:28.421573   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:28.421627   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:28.445812   54219 cri.go:89] found id: ""
	I1212 19:58:28.445826   54219 logs.go:282] 0 containers: []
	W1212 19:58:28.445833   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:28.445840   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:28.445850   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:28.501608   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:28.501625   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:28.513441   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:28.513494   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:28.582207   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:28.574455   14770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:28.574891   14770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:28.576467   14770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:28.576813   14770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:28.578288   14770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:58:28.574455   14770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:28.574891   14770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:28.576467   14770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:28.576813   14770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:28.578288   14770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:58:28.582217   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:28.582229   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:28.644833   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:28.644850   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:31.174256   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:31.184503   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:31.184561   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:31.220119   54219 cri.go:89] found id: ""
	I1212 19:58:31.220139   54219 logs.go:282] 0 containers: []
	W1212 19:58:31.220147   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:31.220158   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:31.220226   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:31.253789   54219 cri.go:89] found id: ""
	I1212 19:58:31.253802   54219 logs.go:282] 0 containers: []
	W1212 19:58:31.253815   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:31.253825   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:31.253884   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:31.279880   54219 cri.go:89] found id: ""
	I1212 19:58:31.279899   54219 logs.go:282] 0 containers: []
	W1212 19:58:31.279906   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:31.279911   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:31.279965   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:31.304491   54219 cri.go:89] found id: ""
	I1212 19:58:31.304504   54219 logs.go:282] 0 containers: []
	W1212 19:58:31.304511   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:31.304515   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:31.304569   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:31.331430   54219 cri.go:89] found id: ""
	I1212 19:58:31.331444   54219 logs.go:282] 0 containers: []
	W1212 19:58:31.331451   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:31.331456   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:31.331510   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:31.357552   54219 cri.go:89] found id: ""
	I1212 19:58:31.357566   54219 logs.go:282] 0 containers: []
	W1212 19:58:31.357572   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:31.357577   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:31.357633   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:31.381902   54219 cri.go:89] found id: ""
	I1212 19:58:31.381916   54219 logs.go:282] 0 containers: []
	W1212 19:58:31.381923   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:31.381930   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:31.381940   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:31.437813   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:31.437831   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:31.448492   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:31.448509   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:31.513035   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:31.504749   14874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:31.505284   14874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:31.506766   14874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:31.507301   14874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:31.509069   14874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:58:31.504749   14874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:31.505284   14874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:31.506766   14874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:31.507301   14874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:31.509069   14874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:58:31.513045   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:31.513056   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:31.574565   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:31.574584   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:34.102253   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:34.112554   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:34.112620   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:34.137461   54219 cri.go:89] found id: ""
	I1212 19:58:34.137475   54219 logs.go:282] 0 containers: []
	W1212 19:58:34.137482   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:34.137487   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:34.137541   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:34.166138   54219 cri.go:89] found id: ""
	I1212 19:58:34.166161   54219 logs.go:282] 0 containers: []
	W1212 19:58:34.166169   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:34.166174   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:34.166234   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:34.193829   54219 cri.go:89] found id: ""
	I1212 19:58:34.193842   54219 logs.go:282] 0 containers: []
	W1212 19:58:34.193849   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:34.193854   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:34.193906   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:34.231695   54219 cri.go:89] found id: ""
	I1212 19:58:34.231708   54219 logs.go:282] 0 containers: []
	W1212 19:58:34.231716   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:34.231721   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:34.231777   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:34.264331   54219 cri.go:89] found id: ""
	I1212 19:58:34.264344   54219 logs.go:282] 0 containers: []
	W1212 19:58:34.264351   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:34.264356   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:34.264412   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:34.288829   54219 cri.go:89] found id: ""
	I1212 19:58:34.288842   54219 logs.go:282] 0 containers: []
	W1212 19:58:34.288849   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:34.288854   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:34.288908   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:34.316442   54219 cri.go:89] found id: ""
	I1212 19:58:34.316456   54219 logs.go:282] 0 containers: []
	W1212 19:58:34.316463   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:34.316471   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:34.316481   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:34.376058   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:34.376076   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:34.386998   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:34.387013   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:34.452379   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:34.443685   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:34.444192   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:34.445866   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:34.446403   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:34.448108   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:58:34.443685   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:34.444192   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:34.445866   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:34.446403   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:34.448108   14982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:58:34.452390   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:34.452401   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:34.514653   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:34.514671   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:37.042798   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:37.053097   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:37.053156   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:37.076591   54219 cri.go:89] found id: ""
	I1212 19:58:37.076604   54219 logs.go:282] 0 containers: []
	W1212 19:58:37.076611   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:37.076616   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:37.076674   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:37.099322   54219 cri.go:89] found id: ""
	I1212 19:58:37.099335   54219 logs.go:282] 0 containers: []
	W1212 19:58:37.099342   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:37.099348   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:37.099402   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:37.123234   54219 cri.go:89] found id: ""
	I1212 19:58:37.123248   54219 logs.go:282] 0 containers: []
	W1212 19:58:37.123255   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:37.123260   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:37.123314   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:37.147746   54219 cri.go:89] found id: ""
	I1212 19:58:37.147760   54219 logs.go:282] 0 containers: []
	W1212 19:58:37.147767   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:37.147772   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:37.147827   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:37.173059   54219 cri.go:89] found id: ""
	I1212 19:58:37.173072   54219 logs.go:282] 0 containers: []
	W1212 19:58:37.173079   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:37.173084   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:37.173141   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:37.208173   54219 cri.go:89] found id: ""
	I1212 19:58:37.208192   54219 logs.go:282] 0 containers: []
	W1212 19:58:37.208199   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:37.208204   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:37.208263   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:37.239049   54219 cri.go:89] found id: ""
	I1212 19:58:37.239063   54219 logs.go:282] 0 containers: []
	W1212 19:58:37.239070   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:37.239078   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:37.239088   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:37.297849   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:37.297866   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:37.309078   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:37.309092   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:37.375029   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:37.367053   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:37.367567   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:37.369321   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:37.369766   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:37.371297   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
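The describe-nodes gather fails for the same underlying reason as the empty crictl sweeps: kubectl inside the node targets https://localhost:8441 (the custom --apiserver-port for this test), and with no kube-apiserver container, nothing is listening there. A sketch for reproducing the probe manually from a shell on the node (binary path, kubeconfig path, and port copied from the log):

    # The exact command the log gatherer runs:
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
    # Or hit the port directly; "connection refused" confirms no listener:
    curl -k 'https://localhost:8441/api?timeout=32s'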
	I1212 19:58:37.375038   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:37.375050   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:37.436797   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:37.436815   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
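The "container status" gather is built as a two-level fallback. The command substitution resolves to the crictl binary when `which` finds it; otherwise it leaves the literal word crictl, whose failure to run then triggers the docker branch of the outer ||. Annotated copy of the command from the log:

    # Prefer crictl if installed; if the substituted word fails to run,
    # fall back to docker for the same "list all containers" view.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a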
	I1212 19:58:39.970179   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:39.980227   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:39.980293   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:40.004882   54219 cri.go:89] found id: ""
	I1212 19:58:40.004896   54219 logs.go:282] 0 containers: []
	W1212 19:58:40.004903   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:40.004907   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:40.004963   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:40.066617   54219 cri.go:89] found id: ""
	I1212 19:58:40.066632   54219 logs.go:282] 0 containers: []
	W1212 19:58:40.066640   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:40.066645   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:40.066717   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:40.102654   54219 cri.go:89] found id: ""
	I1212 19:58:40.102669   54219 logs.go:282] 0 containers: []
	W1212 19:58:40.102676   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:40.102681   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:40.102745   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:40.133625   54219 cri.go:89] found id: ""
	I1212 19:58:40.133640   54219 logs.go:282] 0 containers: []
	W1212 19:58:40.133648   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:40.133654   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:40.133723   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:40.166821   54219 cri.go:89] found id: ""
	I1212 19:58:40.166845   54219 logs.go:282] 0 containers: []
	W1212 19:58:40.166853   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:40.166858   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:40.166927   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:40.195477   54219 cri.go:89] found id: ""
	I1212 19:58:40.195500   54219 logs.go:282] 0 containers: []
	W1212 19:58:40.195509   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:40.195515   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:40.195580   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:40.225935   54219 cri.go:89] found id: ""
	I1212 19:58:40.225949   54219 logs.go:282] 0 containers: []
	W1212 19:58:40.225967   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:40.225976   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:40.225986   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:40.302829   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:40.294976   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:40.295352   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:40.296835   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:40.297228   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:40.298715   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:40.302839   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:40.302850   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:40.365532   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:40.365552   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:40.400282   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:40.400298   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:40.460370   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:40.460389   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
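The dmesg gather trims the kernel ring buffer to warnings and worse. A flag gloss for the util-linux dmesg used here (my reading of the flags, not stated in the log itself):

    # -P disables the pager, -H prints human-readable timestamps,
    # -L=never turns off color, --level keeps only the listed priorities,
    # and tail caps the capture at the most recent 400 lines.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400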
	I1212 19:58:42.971593   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:42.981866   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:42.981931   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:43.006660   54219 cri.go:89] found id: ""
	I1212 19:58:43.006674   54219 logs.go:282] 0 containers: []
	W1212 19:58:43.006690   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:43.006696   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:43.006753   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:43.033557   54219 cri.go:89] found id: ""
	I1212 19:58:43.033571   54219 logs.go:282] 0 containers: []
	W1212 19:58:43.033578   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:43.033583   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:43.033643   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:43.062054   54219 cri.go:89] found id: ""
	I1212 19:58:43.062067   54219 logs.go:282] 0 containers: []
	W1212 19:58:43.062073   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:43.062078   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:43.062139   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:43.086826   54219 cri.go:89] found id: ""
	I1212 19:58:43.086841   54219 logs.go:282] 0 containers: []
	W1212 19:58:43.086849   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:43.086854   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:43.086920   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:43.112001   54219 cri.go:89] found id: ""
	I1212 19:58:43.112015   54219 logs.go:282] 0 containers: []
	W1212 19:58:43.112022   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:43.112027   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:43.112099   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:43.137727   54219 cri.go:89] found id: ""
	I1212 19:58:43.137741   54219 logs.go:282] 0 containers: []
	W1212 19:58:43.137748   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:43.137753   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:43.137811   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:43.163693   54219 cri.go:89] found id: ""
	I1212 19:58:43.163707   54219 logs.go:282] 0 containers: []
	W1212 19:58:43.163714   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:43.163731   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:43.163742   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:43.174602   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:43.174617   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:43.254196   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:43.243179   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:43.243697   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:43.245358   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:43.245738   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:43.247174   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:43.254213   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:43.254224   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:43.321187   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:43.321206   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:43.353090   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:43.353105   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:45.910450   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:45.920312   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:45.920373   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:45.942607   54219 cri.go:89] found id: ""
	I1212 19:58:45.942620   54219 logs.go:282] 0 containers: []
	W1212 19:58:45.942627   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:45.942632   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:45.942688   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:45.966155   54219 cri.go:89] found id: ""
	I1212 19:58:45.966168   54219 logs.go:282] 0 containers: []
	W1212 19:58:45.966175   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:45.966179   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:45.966235   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:45.989218   54219 cri.go:89] found id: ""
	I1212 19:58:45.989232   54219 logs.go:282] 0 containers: []
	W1212 19:58:45.989239   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:45.989243   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:45.989298   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:46.016207   54219 cri.go:89] found id: ""
	I1212 19:58:46.016222   54219 logs.go:282] 0 containers: []
	W1212 19:58:46.016228   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:46.016234   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:46.016291   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:46.045554   54219 cri.go:89] found id: ""
	I1212 19:58:46.045569   54219 logs.go:282] 0 containers: []
	W1212 19:58:46.045576   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:46.045581   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:46.045635   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:46.069843   54219 cri.go:89] found id: ""
	I1212 19:58:46.069856   54219 logs.go:282] 0 containers: []
	W1212 19:58:46.069865   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:46.069870   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:46.069924   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:46.093840   54219 cri.go:89] found id: ""
	I1212 19:58:46.093854   54219 logs.go:282] 0 containers: []
	W1212 19:58:46.093860   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:46.093869   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:46.093878   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:46.149331   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:46.149349   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:46.159907   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:46.159924   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:46.230481   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:46.222609   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:46.223298   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:46.224445   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:46.225036   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:46.226545   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:46.230490   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:46.230502   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:46.300039   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:46.300060   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:48.829920   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:48.840025   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:48.840080   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:48.869553   54219 cri.go:89] found id: ""
	I1212 19:58:48.869567   54219 logs.go:282] 0 containers: []
	W1212 19:58:48.869574   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:48.869579   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:48.869633   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:48.894185   54219 cri.go:89] found id: ""
	I1212 19:58:48.894199   54219 logs.go:282] 0 containers: []
	W1212 19:58:48.894205   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:48.894220   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:48.894280   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:48.918726   54219 cri.go:89] found id: ""
	I1212 19:58:48.918740   54219 logs.go:282] 0 containers: []
	W1212 19:58:48.918752   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:48.918757   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:48.918814   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:48.943092   54219 cri.go:89] found id: ""
	I1212 19:58:48.943106   54219 logs.go:282] 0 containers: []
	W1212 19:58:48.943113   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:48.943118   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:48.943172   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:48.967616   54219 cri.go:89] found id: ""
	I1212 19:58:48.967630   54219 logs.go:282] 0 containers: []
	W1212 19:58:48.967637   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:48.967642   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:48.967697   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:48.993271   54219 cri.go:89] found id: ""
	I1212 19:58:48.993284   54219 logs.go:282] 0 containers: []
	W1212 19:58:48.993291   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:48.993296   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:48.993355   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:49.018337   54219 cri.go:89] found id: ""
	I1212 19:58:49.018359   54219 logs.go:282] 0 containers: []
	W1212 19:58:49.018376   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:49.018386   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:49.018395   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:49.074620   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:49.074637   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:49.085360   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:49.085378   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:49.147253   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:49.138899   15501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:49.139468   15501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:49.141426   15501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:49.141871   15501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:49.143362   15501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:49.147263   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:49.147274   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:49.215977   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:49.215996   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:51.751688   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:51.761744   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:51.761806   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:51.790227   54219 cri.go:89] found id: ""
	I1212 19:58:51.790241   54219 logs.go:282] 0 containers: []
	W1212 19:58:51.790248   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:51.790253   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:51.790309   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:51.817250   54219 cri.go:89] found id: ""
	I1212 19:58:51.817264   54219 logs.go:282] 0 containers: []
	W1212 19:58:51.817271   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:51.817276   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:51.817346   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:51.842832   54219 cri.go:89] found id: ""
	I1212 19:58:51.842845   54219 logs.go:282] 0 containers: []
	W1212 19:58:51.842851   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:51.842856   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:51.842916   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:51.867234   54219 cri.go:89] found id: ""
	I1212 19:58:51.867249   54219 logs.go:282] 0 containers: []
	W1212 19:58:51.867256   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:51.867261   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:51.867315   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:51.895349   54219 cri.go:89] found id: ""
	I1212 19:58:51.895364   54219 logs.go:282] 0 containers: []
	W1212 19:58:51.895371   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:51.895376   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:51.895432   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:51.920577   54219 cri.go:89] found id: ""
	I1212 19:58:51.920594   54219 logs.go:282] 0 containers: []
	W1212 19:58:51.920603   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:51.920612   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:51.920674   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:51.945231   54219 cri.go:89] found id: ""
	I1212 19:58:51.945244   54219 logs.go:282] 0 containers: []
	W1212 19:58:51.945251   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:51.945258   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:51.945268   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:52.004677   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:52.004694   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:52.018082   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:52.018098   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:52.085848   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:52.076633   15607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:52.077498   15607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:52.079211   15607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:52.079913   15607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:52.081677   15607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:52.085859   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:52.085869   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:52.155168   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:52.155196   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:54.685430   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:54.695280   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:54.695335   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:54.720974   54219 cri.go:89] found id: ""
	I1212 19:58:54.720988   54219 logs.go:282] 0 containers: []
	W1212 19:58:54.720994   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:54.721001   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:54.721063   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:54.744863   54219 cri.go:89] found id: ""
	I1212 19:58:54.744876   54219 logs.go:282] 0 containers: []
	W1212 19:58:54.744883   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:54.744888   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:54.744943   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:54.768441   54219 cri.go:89] found id: ""
	I1212 19:58:54.768454   54219 logs.go:282] 0 containers: []
	W1212 19:58:54.768461   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:54.768465   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:54.768520   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:54.797540   54219 cri.go:89] found id: ""
	I1212 19:58:54.797554   54219 logs.go:282] 0 containers: []
	W1212 19:58:54.797561   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:54.797566   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:54.797633   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:54.825756   54219 cri.go:89] found id: ""
	I1212 19:58:54.825770   54219 logs.go:282] 0 containers: []
	W1212 19:58:54.825776   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:54.825782   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:54.825850   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:54.853837   54219 cri.go:89] found id: ""
	I1212 19:58:54.853850   54219 logs.go:282] 0 containers: []
	W1212 19:58:54.853857   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:54.853867   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:54.853921   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:54.880854   54219 cri.go:89] found id: ""
	I1212 19:58:54.880868   54219 logs.go:282] 0 containers: []
	W1212 19:58:54.880874   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:54.880882   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:54.880892   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:54.908639   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:54.908655   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:54.965093   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:54.965111   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:54.976121   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:54.976137   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:55.044063   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:55.035541   15722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:55.036437   15722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:55.038095   15722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:55.038458   15722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:55.040134   15722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:55.044074   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:55.044085   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:58:57.606891   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:58:57.617246   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:58:57.617305   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:58:57.641248   54219 cri.go:89] found id: ""
	I1212 19:58:57.641261   54219 logs.go:282] 0 containers: []
	W1212 19:58:57.641269   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:58:57.641274   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:58:57.641336   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:58:57.666129   54219 cri.go:89] found id: ""
	I1212 19:58:57.666160   54219 logs.go:282] 0 containers: []
	W1212 19:58:57.666167   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:58:57.666171   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:58:57.666226   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:58:57.690889   54219 cri.go:89] found id: ""
	I1212 19:58:57.690902   54219 logs.go:282] 0 containers: []
	W1212 19:58:57.690913   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:58:57.690918   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:58:57.690974   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:58:57.719998   54219 cri.go:89] found id: ""
	I1212 19:58:57.720012   54219 logs.go:282] 0 containers: []
	W1212 19:58:57.720019   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:58:57.720024   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:58:57.720080   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:58:57.745021   54219 cri.go:89] found id: ""
	I1212 19:58:57.745034   54219 logs.go:282] 0 containers: []
	W1212 19:58:57.745041   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:58:57.745046   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:58:57.745102   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:58:57.769302   54219 cri.go:89] found id: ""
	I1212 19:58:57.769316   54219 logs.go:282] 0 containers: []
	W1212 19:58:57.769322   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:58:57.769327   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:58:57.769383   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:58:57.792874   54219 cri.go:89] found id: ""
	I1212 19:58:57.792887   54219 logs.go:282] 0 containers: []
	W1212 19:58:57.792894   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:58:57.792902   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:58:57.792913   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:58:57.821987   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:58:57.822003   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:58:57.878403   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:58:57.878420   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:58:57.889240   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:58:57.889255   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:58:57.955924   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:58:57.946885   15825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:57.947418   15825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:57.949013   15825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:57.949699   15825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:58:57.951375   15825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:58:57.955936   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:58:57.955948   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:00.519976   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:00.530412   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:00.530471   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:00.562296   54219 cri.go:89] found id: ""
	I1212 19:59:00.562309   54219 logs.go:282] 0 containers: []
	W1212 19:59:00.562316   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:00.562321   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:00.562381   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:00.590126   54219 cri.go:89] found id: ""
	I1212 19:59:00.590140   54219 logs.go:282] 0 containers: []
	W1212 19:59:00.590147   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:00.590152   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:00.590208   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:00.618262   54219 cri.go:89] found id: ""
	I1212 19:59:00.618276   54219 logs.go:282] 0 containers: []
	W1212 19:59:00.618282   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:00.618287   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:00.618350   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:00.643416   54219 cri.go:89] found id: ""
	I1212 19:59:00.643430   54219 logs.go:282] 0 containers: []
	W1212 19:59:00.643437   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:00.643442   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:00.643497   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:00.668447   54219 cri.go:89] found id: ""
	I1212 19:59:00.668461   54219 logs.go:282] 0 containers: []
	W1212 19:59:00.668469   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:00.668474   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:00.668534   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:00.695735   54219 cri.go:89] found id: ""
	I1212 19:59:00.695748   54219 logs.go:282] 0 containers: []
	W1212 19:59:00.695755   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:00.695760   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:00.695820   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:00.729197   54219 cri.go:89] found id: ""
	I1212 19:59:00.729211   54219 logs.go:282] 0 containers: []
	W1212 19:59:00.729219   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:00.729226   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:00.729237   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:00.739980   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:00.739996   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:00.812904   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:00.804740   15919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:00.805626   15919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:00.806481   15919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:00.807322   15919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:00.809016   15919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:00.804740   15919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:00.805626   15919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:00.806481   15919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:00.807322   15919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:00.809016   15919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
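The describe-nodes step fails for the same underlying reason the container scans come back empty: nothing is listening on the apiserver port. kubectl's discovery client retries the API group request five times, and every attempt gets "connect: connection refused" on [::1]:8441, the --apiserver-port=8441 chosen by the test invocation. The condition is easy to confirm independently; the following is an illustrative probe with the host and port taken from the log, not from minikube's source:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The log shows kubectl dialing localhost:8441 and being refused.
	addr := "localhost:8441"
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// "connection refused" here matches the kubectl errors above:
		// the host resolves and responds, but no process has bound the port.
		fmt.Printf("apiserver not up at %s: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("something is listening on %s\n", addr)
}

Connection refused (as opposed to a timeout) means the node itself is reachable; the apiserver container simply never started, consistent with the empty crictl scans.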
	I1212 19:59:00.812914   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:00.812925   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:00.876760   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:00.876778   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:00.905954   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:00.905970   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
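With no containers to inspect, the remaining evidence comes from host-level logs: minikube tails dmesg at warning level and above, plus the kubelet and containerd journald units, each capped at 400 lines. A sketch that reproduces the same gathering over a local shell (unit names and the line cap copied from the log; error handling simplified, and the SSH transport minikube uses is omitted):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands mirror the "Gathering logs for ..." steps in the log above.
	cmds := map[string][]string{
		"dmesg":      {"bash", "-c", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		"kubelet":    {"bash", "-c", "sudo journalctl -u kubelet -n 400"},
		"containerd": {"bash", "-c", "sudo journalctl -u containerd -n 400"},
	}
	for name, argv := range cmds {
		out, err := exec.Command(argv[0], argv[1:]...).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", name, err)
			continue
		}
		fmt.Printf("==> %s <==\n%s\n", name, out)
	}
}

In a run like this one, the kubelet unit is usually where the root cause surfaces, since the kubelet is what should have created the static control-plane pods that crictl cannot find.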
	I1212 19:59:03.466026   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:03.476441   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:03.476505   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:03.512755   54219 cri.go:89] found id: ""
	I1212 19:59:03.512774   54219 logs.go:282] 0 containers: []
	W1212 19:59:03.512781   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:03.512786   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:03.512844   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:03.536972   54219 cri.go:89] found id: ""
	I1212 19:59:03.536992   54219 logs.go:282] 0 containers: []
	W1212 19:59:03.536999   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:03.537004   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:03.537071   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:03.564981   54219 cri.go:89] found id: ""
	I1212 19:59:03.564995   54219 logs.go:282] 0 containers: []
	W1212 19:59:03.565002   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:03.565006   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:03.565061   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:03.589258   54219 cri.go:89] found id: ""
	I1212 19:59:03.589271   54219 logs.go:282] 0 containers: []
	W1212 19:59:03.589278   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:03.589283   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:03.589335   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:03.617627   54219 cri.go:89] found id: ""
	I1212 19:59:03.617649   54219 logs.go:282] 0 containers: []
	W1212 19:59:03.617656   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:03.617661   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:03.617724   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:03.643124   54219 cri.go:89] found id: ""
	I1212 19:59:03.643137   54219 logs.go:282] 0 containers: []
	W1212 19:59:03.643144   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:03.643149   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:03.643205   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:03.667587   54219 cri.go:89] found id: ""
	I1212 19:59:03.667601   54219 logs.go:282] 0 containers: []
	W1212 19:59:03.667607   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:03.667615   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:03.667624   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:03.724310   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:03.724326   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:03.735089   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:03.735105   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:03.799034   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:03.791373   16026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:03.792104   16026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:03.793630   16026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:03.793918   16026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:03.795356   16026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:03.791373   16026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:03.792104   16026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:03.793630   16026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:03.793918   16026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:03.795356   16026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:03.799043   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:03.799054   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:03.861867   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:03.861885   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
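The timestamps show this whole diagnostic pass repeating on a roughly three-second cadence (19:59:00, 19:59:03, 19:59:06, ...), with each cycle opening with sudo pgrep -xnf kube-apiserver.*minikube.* to check whether an apiserver process has appeared yet. A hedged sketch of such a wait loop follows; the interval and pgrep pattern are read off the log, while the overall deadline is an assumed placeholder and the real minikube retry logic differs in its details:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative cap, not from the log
	for time.Now().Before(deadline) {
		// Equivalent of: sudo pgrep -xnf kube-apiserver.*minikube.*
		// pgrep exits non-zero when no process matches, so err == nil means found.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		// No process yet; the log shows the next scan about 3 seconds later.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for kube-apiserver")
}

Since the process never appears, every cycle in this log takes the not-found branch and re-runs the full crictl scan and log-gathering pass.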
	I1212 19:59:06.393541   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:06.403453   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:06.403511   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:06.427440   54219 cri.go:89] found id: ""
	I1212 19:59:06.427454   54219 logs.go:282] 0 containers: []
	W1212 19:59:06.427460   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:06.427465   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:06.427524   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:06.457341   54219 cri.go:89] found id: ""
	I1212 19:59:06.457355   54219 logs.go:282] 0 containers: []
	W1212 19:59:06.457361   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:06.457366   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:06.457424   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:06.495095   54219 cri.go:89] found id: ""
	I1212 19:59:06.495110   54219 logs.go:282] 0 containers: []
	W1212 19:59:06.495116   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:06.495122   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:06.495179   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:06.522006   54219 cri.go:89] found id: ""
	I1212 19:59:06.522041   54219 logs.go:282] 0 containers: []
	W1212 19:59:06.522048   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:06.522053   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:06.522111   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:06.551005   54219 cri.go:89] found id: ""
	I1212 19:59:06.551019   54219 logs.go:282] 0 containers: []
	W1212 19:59:06.551026   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:06.551031   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:06.551099   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:06.576063   54219 cri.go:89] found id: ""
	I1212 19:59:06.576089   54219 logs.go:282] 0 containers: []
	W1212 19:59:06.576096   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:06.576101   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:06.576157   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:06.601543   54219 cri.go:89] found id: ""
	I1212 19:59:06.601557   54219 logs.go:282] 0 containers: []
	W1212 19:59:06.601565   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:06.601572   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:06.601582   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:06.657957   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:06.657977   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:06.668650   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:06.668665   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:06.730730   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:06.722725   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:06.723501   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:06.725053   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:06.725374   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:06.726867   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:06.722725   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:06.723501   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:06.725053   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:06.725374   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:06.726867   16129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:06.730739   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:06.730749   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:06.793201   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:06.793219   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
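The "container status" step is deliberately runtime-agnostic: the shell line sudo `which crictl || echo crictl` ps -a || sudo docker ps -a resolves crictl's path when possible and falls back to docker ps -a if crictl is missing or its invocation fails. The same fallback rendered in Go, as a sketch under those assumptions:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// First choice: crictl ps -a, resolved via PATH as `which crictl` would be.
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
		fmt.Print(string(out))
		return
	}
	// Fallback mirrors the `|| sudo docker ps -a` branch from the log.
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Printf("neither crictl nor docker produced container status: %v\n", err)
		return
	}
	fmt.Print(string(out))
}

On this containerd-based node the crictl branch succeeds, so the docker fallback is never exercised; it exists for profiles where the Docker runtime is in use instead.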
	I1212 19:59:09.321790   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:09.332762   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:09.332820   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:09.359927   54219 cri.go:89] found id: ""
	I1212 19:59:09.359941   54219 logs.go:282] 0 containers: []
	W1212 19:59:09.359948   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:09.359953   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:09.360026   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:09.385111   54219 cri.go:89] found id: ""
	I1212 19:59:09.385125   54219 logs.go:282] 0 containers: []
	W1212 19:59:09.385137   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:09.385142   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:09.385201   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:09.416991   54219 cri.go:89] found id: ""
	I1212 19:59:09.417006   54219 logs.go:282] 0 containers: []
	W1212 19:59:09.417013   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:09.417018   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:09.417077   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:09.442593   54219 cri.go:89] found id: ""
	I1212 19:59:09.442606   54219 logs.go:282] 0 containers: []
	W1212 19:59:09.442612   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:09.442617   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:09.442672   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:09.469724   54219 cri.go:89] found id: ""
	I1212 19:59:09.469738   54219 logs.go:282] 0 containers: []
	W1212 19:59:09.469745   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:09.469750   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:09.469806   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:09.506134   54219 cri.go:89] found id: ""
	I1212 19:59:09.506148   54219 logs.go:282] 0 containers: []
	W1212 19:59:09.506154   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:09.506160   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:09.506226   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:09.537548   54219 cri.go:89] found id: ""
	I1212 19:59:09.537561   54219 logs.go:282] 0 containers: []
	W1212 19:59:09.537568   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:09.537576   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:09.537585   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:09.596110   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:09.596128   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:09.607356   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:09.607373   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:09.678885   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:09.670805   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:09.671533   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:09.673167   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:09.673470   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:09.674917   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:09.670805   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:09.671533   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:09.673167   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:09.673470   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:09.674917   16234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:09.678895   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:09.678906   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:09.744120   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:09.744138   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:12.273229   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:12.283400   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:12.283456   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:12.307126   54219 cri.go:89] found id: ""
	I1212 19:59:12.307140   54219 logs.go:282] 0 containers: []
	W1212 19:59:12.307147   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:12.307152   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:12.307208   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:12.333237   54219 cri.go:89] found id: ""
	I1212 19:59:12.333250   54219 logs.go:282] 0 containers: []
	W1212 19:59:12.333257   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:12.333261   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:12.333318   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:12.357336   54219 cri.go:89] found id: ""
	I1212 19:59:12.357349   54219 logs.go:282] 0 containers: []
	W1212 19:59:12.357356   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:12.357361   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:12.357416   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:12.382066   54219 cri.go:89] found id: ""
	I1212 19:59:12.382080   54219 logs.go:282] 0 containers: []
	W1212 19:59:12.382086   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:12.382091   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:12.382147   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:12.406069   54219 cri.go:89] found id: ""
	I1212 19:59:12.406082   54219 logs.go:282] 0 containers: []
	W1212 19:59:12.406089   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:12.406094   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:12.406149   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:12.434345   54219 cri.go:89] found id: ""
	I1212 19:59:12.434365   54219 logs.go:282] 0 containers: []
	W1212 19:59:12.434372   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:12.434377   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:12.434457   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:12.466422   54219 cri.go:89] found id: ""
	I1212 19:59:12.466436   54219 logs.go:282] 0 containers: []
	W1212 19:59:12.466444   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:12.466451   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:12.466462   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:12.528768   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:12.528787   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:12.541490   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:12.541508   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:12.602589   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:12.594584   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:12.594975   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:12.596484   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:12.596787   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:12.598425   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:12.594584   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:12.594975   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:12.596484   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:12.596787   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:12.598425   16340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:12.602599   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:12.602609   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:12.664894   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:12.664913   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:15.192235   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:15.202664   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:15.202722   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:15.227464   54219 cri.go:89] found id: ""
	I1212 19:59:15.227477   54219 logs.go:282] 0 containers: []
	W1212 19:59:15.227484   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:15.227489   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:15.227545   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:15.251075   54219 cri.go:89] found id: ""
	I1212 19:59:15.251089   54219 logs.go:282] 0 containers: []
	W1212 19:59:15.251096   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:15.251101   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:15.251156   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:15.275993   54219 cri.go:89] found id: ""
	I1212 19:59:15.276006   54219 logs.go:282] 0 containers: []
	W1212 19:59:15.276013   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:15.276018   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:15.276075   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:15.299883   54219 cri.go:89] found id: ""
	I1212 19:59:15.299896   54219 logs.go:282] 0 containers: []
	W1212 19:59:15.299903   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:15.299908   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:15.299961   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:15.324623   54219 cri.go:89] found id: ""
	I1212 19:59:15.324636   54219 logs.go:282] 0 containers: []
	W1212 19:59:15.324642   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:15.324647   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:15.324702   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:15.350461   54219 cri.go:89] found id: ""
	I1212 19:59:15.350474   54219 logs.go:282] 0 containers: []
	W1212 19:59:15.350481   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:15.350486   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:15.350541   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:15.375380   54219 cri.go:89] found id: ""
	I1212 19:59:15.375407   54219 logs.go:282] 0 containers: []
	W1212 19:59:15.375415   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:15.375423   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:15.375434   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:15.431649   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:15.431669   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:15.444811   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:15.444836   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:15.537885   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:15.529076   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:15.529839   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:15.530552   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:15.532384   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:15.532848   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:15.529076   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:15.529839   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:15.530552   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:15.532384   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:15.532848   16447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:15.537895   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:15.537908   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:15.604300   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:15.604319   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:18.136615   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:18.146971   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:18.147036   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:18.176330   54219 cri.go:89] found id: ""
	I1212 19:59:18.176344   54219 logs.go:282] 0 containers: []
	W1212 19:59:18.176351   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:18.176359   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:18.176416   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:18.200844   54219 cri.go:89] found id: ""
	I1212 19:59:18.200857   54219 logs.go:282] 0 containers: []
	W1212 19:59:18.200863   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:18.200868   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:18.200924   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:18.224026   54219 cri.go:89] found id: ""
	I1212 19:59:18.224040   54219 logs.go:282] 0 containers: []
	W1212 19:59:18.224046   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:18.224051   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:18.224107   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:18.252073   54219 cri.go:89] found id: ""
	I1212 19:59:18.252086   54219 logs.go:282] 0 containers: []
	W1212 19:59:18.252093   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:18.252098   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:18.252153   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:18.277440   54219 cri.go:89] found id: ""
	I1212 19:59:18.277454   54219 logs.go:282] 0 containers: []
	W1212 19:59:18.277460   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:18.277465   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:18.277521   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:18.302183   54219 cri.go:89] found id: ""
	I1212 19:59:18.302197   54219 logs.go:282] 0 containers: []
	W1212 19:59:18.302214   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:18.302220   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:18.302286   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:18.326037   54219 cri.go:89] found id: ""
	I1212 19:59:18.326058   54219 logs.go:282] 0 containers: []
	W1212 19:59:18.326065   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:18.326073   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:18.326083   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:18.380825   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:18.380843   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:18.391618   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:18.391634   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:18.463287   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:18.454358   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:18.455450   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:18.457129   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:18.457425   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:18.459011   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:18.454358   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:18.455450   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:18.457129   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:18.457425   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:18.459011   16547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:18.463297   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:18.463309   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:18.536948   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:18.536967   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:21.064758   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:21.074846   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:21.074903   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:21.099031   54219 cri.go:89] found id: ""
	I1212 19:59:21.099044   54219 logs.go:282] 0 containers: []
	W1212 19:59:21.099051   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:21.099056   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:21.099109   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:21.123108   54219 cri.go:89] found id: ""
	I1212 19:59:21.123121   54219 logs.go:282] 0 containers: []
	W1212 19:59:21.123127   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:21.123132   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:21.123187   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:21.146869   54219 cri.go:89] found id: ""
	I1212 19:59:21.146883   54219 logs.go:282] 0 containers: []
	W1212 19:59:21.146890   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:21.146895   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:21.146964   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:21.171309   54219 cri.go:89] found id: ""
	I1212 19:59:21.171323   54219 logs.go:282] 0 containers: []
	W1212 19:59:21.171329   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:21.171340   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:21.171395   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:21.195200   54219 cri.go:89] found id: ""
	I1212 19:59:21.195213   54219 logs.go:282] 0 containers: []
	W1212 19:59:21.195219   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:21.195224   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:21.195282   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:21.218648   54219 cri.go:89] found id: ""
	I1212 19:59:21.218661   54219 logs.go:282] 0 containers: []
	W1212 19:59:21.218668   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:21.218673   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:21.218726   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:21.243375   54219 cri.go:89] found id: ""
	I1212 19:59:21.243388   54219 logs.go:282] 0 containers: []
	W1212 19:59:21.243395   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:21.243402   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:21.243411   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:21.299185   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:21.299202   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:21.309826   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:21.309840   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:21.373437   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:21.365006   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.365633   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.367303   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.367959   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.369725   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 19:59:21.365006   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.365633   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.367303   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.367959   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:21.369725   16654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 19:59:21.373447   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:21.373457   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:21.435817   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:21.435878   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:23.968994   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:23.978907   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:23.978964   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:24.004005   54219 cri.go:89] found id: ""
	I1212 19:59:24.004018   54219 logs.go:282] 0 containers: []
	W1212 19:59:24.004025   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:24.004030   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:24.004085   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:24.031561   54219 cri.go:89] found id: ""
	I1212 19:59:24.031576   54219 logs.go:282] 0 containers: []
	W1212 19:59:24.031583   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:24.031588   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:24.031648   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:24.058089   54219 cri.go:89] found id: ""
	I1212 19:59:24.058105   54219 logs.go:282] 0 containers: []
	W1212 19:59:24.058113   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:24.058120   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:24.058183   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:24.083693   54219 cri.go:89] found id: ""
	I1212 19:59:24.083707   54219 logs.go:282] 0 containers: []
	W1212 19:59:24.083713   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:24.083718   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:24.083774   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:24.110732   54219 cri.go:89] found id: ""
	I1212 19:59:24.110746   54219 logs.go:282] 0 containers: []
	W1212 19:59:24.110753   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:24.110758   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:24.110814   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:24.135252   54219 cri.go:89] found id: ""
	I1212 19:59:24.135266   54219 logs.go:282] 0 containers: []
	W1212 19:59:24.135273   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:24.135278   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:24.135330   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:24.158751   54219 cri.go:89] found id: ""
	I1212 19:59:24.158765   54219 logs.go:282] 0 containers: []
	W1212 19:59:24.158771   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:24.158779   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:24.158788   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:24.188496   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:24.188513   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:24.244683   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:24.244701   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:24.255424   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:24.255440   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:24.324102   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:24.316334   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:24.316892   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:24.318479   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:24.319116   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:24.320192   16771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:59:24.324113   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:24.324126   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
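The repeated blocks above are minikube's control-plane probe loop: it pgreps for a kube-apiserver process, queries the CRI runtime for each expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), and, finding nothing, gathers kubelet, dmesg, describe-nodes, and containerd logs before retrying. Note the container-status fallback it uses: "which crictl || echo crictl", then "|| sudo docker ps -a". Below is a minimal standalone sketch of the probe step, assuming crictl is available on the node; the probe helper is an illustration, not minikube's actual code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // probe mirrors `sudo crictl ps -a --quiet --name=<name>`: --quiet prints
    // one container ID per line, so empty output means no matching container.
    func probe(name string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) != "", nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
            found, err := probe(c)
            fmt.Printf("%-24s found=%v err=%v\n", c, found, err)
        }
    }

Every query in this run comes back with `found id: ""`, which is why the loop keeps cycling until the restart budget runs out.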
	I1212 19:59:26.896008   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:26.906451   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:26.906508   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:26.930525   54219 cri.go:89] found id: ""
	I1212 19:59:26.930538   54219 logs.go:282] 0 containers: []
	W1212 19:59:26.930546   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:26.930551   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:26.930607   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:26.954197   54219 cri.go:89] found id: ""
	I1212 19:59:26.954212   54219 logs.go:282] 0 containers: []
	W1212 19:59:26.954219   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:26.954224   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:26.954284   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:26.978362   54219 cri.go:89] found id: ""
	I1212 19:59:26.978375   54219 logs.go:282] 0 containers: []
	W1212 19:59:26.978381   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:26.978388   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:26.978444   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:27.003156   54219 cri.go:89] found id: ""
	I1212 19:59:27.003170   54219 logs.go:282] 0 containers: []
	W1212 19:59:27.003177   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:27.003182   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:27.003241   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:27.035090   54219 cri.go:89] found id: ""
	I1212 19:59:27.035103   54219 logs.go:282] 0 containers: []
	W1212 19:59:27.035110   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:27.035115   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:27.035170   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:27.059270   54219 cri.go:89] found id: ""
	I1212 19:59:27.059284   54219 logs.go:282] 0 containers: []
	W1212 19:59:27.059291   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:27.059296   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:27.059351   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:27.083068   54219 cri.go:89] found id: ""
	I1212 19:59:27.083081   54219 logs.go:282] 0 containers: []
	W1212 19:59:27.083088   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:27.083096   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:27.083105   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:27.138962   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:27.138979   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:27.149646   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:27.149662   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:27.216025   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:27.207685   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:27.208329   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:27.210138   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:27.210711   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:27.212312   16865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:59:27.216036   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:27.216046   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:27.277808   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:27.277826   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:29.806087   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:29.816453   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:29.816508   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:29.839921   54219 cri.go:89] found id: ""
	I1212 19:59:29.839935   54219 logs.go:282] 0 containers: []
	W1212 19:59:29.839943   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:29.839950   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:29.840023   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:29.868215   54219 cri.go:89] found id: ""
	I1212 19:59:29.868229   54219 logs.go:282] 0 containers: []
	W1212 19:59:29.868236   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:29.868241   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:29.868298   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:29.892199   54219 cri.go:89] found id: ""
	I1212 19:59:29.892212   54219 logs.go:282] 0 containers: []
	W1212 19:59:29.892219   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:29.892226   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:29.892281   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:29.921316   54219 cri.go:89] found id: ""
	I1212 19:59:29.921330   54219 logs.go:282] 0 containers: []
	W1212 19:59:29.921336   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:29.921351   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:29.921415   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:29.946039   54219 cri.go:89] found id: ""
	I1212 19:59:29.946053   54219 logs.go:282] 0 containers: []
	W1212 19:59:29.946059   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:29.946064   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:29.946125   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:29.976514   54219 cri.go:89] found id: ""
	I1212 19:59:29.976528   54219 logs.go:282] 0 containers: []
	W1212 19:59:29.976536   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:29.976541   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:29.976601   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:30.000755   54219 cri.go:89] found id: ""
	I1212 19:59:30.000768   54219 logs.go:282] 0 containers: []
	W1212 19:59:30.000775   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:30.000783   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:30.000793   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:30.058301   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:30.058321   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:30.070295   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:30.070312   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:30.139764   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:30.131062   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:30.131753   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:30.133476   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:30.134278   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:30.135896   16970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:59:30.139775   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:30.139786   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:30.203348   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:30.203371   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:32.732603   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:32.743210   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 19:59:32.743266   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 19:59:32.775589   54219 cri.go:89] found id: ""
	I1212 19:59:32.775603   54219 logs.go:282] 0 containers: []
	W1212 19:59:32.775610   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 19:59:32.775614   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 19:59:32.775673   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 19:59:32.799717   54219 cri.go:89] found id: ""
	I1212 19:59:32.799730   54219 logs.go:282] 0 containers: []
	W1212 19:59:32.799737   54219 logs.go:284] No container was found matching "etcd"
	I1212 19:59:32.799742   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 19:59:32.799801   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 19:59:32.826819   54219 cri.go:89] found id: ""
	I1212 19:59:32.826832   54219 logs.go:282] 0 containers: []
	W1212 19:59:32.826839   54219 logs.go:284] No container was found matching "coredns"
	I1212 19:59:32.826844   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 19:59:32.826902   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 19:59:32.851752   54219 cri.go:89] found id: ""
	I1212 19:59:32.851765   54219 logs.go:282] 0 containers: []
	W1212 19:59:32.851772   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 19:59:32.851777   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 19:59:32.851832   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 19:59:32.876003   54219 cri.go:89] found id: ""
	I1212 19:59:32.876017   54219 logs.go:282] 0 containers: []
	W1212 19:59:32.876024   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 19:59:32.876035   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 19:59:32.876093   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 19:59:32.902460   54219 cri.go:89] found id: ""
	I1212 19:59:32.902474   54219 logs.go:282] 0 containers: []
	W1212 19:59:32.902480   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 19:59:32.902504   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 19:59:32.902560   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 19:59:32.925773   54219 cri.go:89] found id: ""
	I1212 19:59:32.925787   54219 logs.go:282] 0 containers: []
	W1212 19:59:32.925793   54219 logs.go:284] No container was found matching "kindnet"
	I1212 19:59:32.925802   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 19:59:32.925812   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 19:59:32.936160   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 19:59:32.936177   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 19:59:33.000494   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 19:59:32.992160   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:32.992913   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:32.994556   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:32.994894   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 19:59:32.996429   17072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 19:59:33.000505   54219 logs.go:123] Gathering logs for containerd ...
	I1212 19:59:33.000515   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 19:59:33.066244   54219 logs.go:123] Gathering logs for container status ...
	I1212 19:59:33.066264   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 19:59:33.096113   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 19:59:33.096128   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 19:59:35.653289   54219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:59:35.663651   54219 kubeadm.go:602] duration metric: took 4m3.519380388s to restartPrimaryControlPlane
	W1212 19:59:35.663714   54219 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
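The restart path runs that probe loop under a deadline: after 4m3.5s (the duration metric logged just above) with no kube-apiserver, minikube abandons restartPrimaryControlPlane and falls back to "kubeadm reset" plus a fresh "kubeadm init", which is what the following lines show. A sketch of that wait-with-deadline pattern, reusing the hypothetical probe helper from the earlier sketch (imports: fmt, time):

    // waitForAPIServer polls until probe reports a kube-apiserver container
    // or the deadline passes; on timeout the caller resets the cluster.
    func waitForAPIServer(timeout, interval time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if ok, _ := probe("kube-apiserver"); ok {
                return nil
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("kube-apiserver not up after %s, resetting cluster", timeout)
    }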
	I1212 19:59:35.663796   54219 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1212 19:59:36.078838   54219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 19:59:36.092917   54219 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 19:59:36.101391   54219 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 19:59:36.101446   54219 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 19:59:36.109781   54219 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 19:59:36.109792   54219 kubeadm.go:158] found existing configuration files:
	
	I1212 19:59:36.109842   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 19:59:36.118044   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 19:59:36.118100   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 19:59:36.125732   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 19:59:36.133647   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 19:59:36.133711   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 19:59:36.141349   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 19:59:36.149338   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 19:59:36.149401   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 19:59:36.156798   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 19:59:36.164406   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 19:59:36.164460   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
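The grep/rm sequence above is the stale-config cleanup that precedes re-init: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint, and any file that is missing or points elsewhere is removed so "kubeadm init" regenerates it (here all four files are already gone after the reset, so every grep exits with status 2). A hedged equivalent follows, with the file list and endpoint taken from the log; this sketch needs root to actually remove the files:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8441"
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing or pointing at the wrong endpoint: drop it so
                // kubeadm init writes a fresh copy.
                os.Remove(f)
                fmt.Println("removed stale", f)
            }
        }
    }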
	I1212 19:59:36.171816   54219 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 19:59:36.215707   54219 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 19:59:36.215925   54219 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 19:59:36.287068   54219 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 19:59:36.287132   54219 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 19:59:36.287172   54219 kubeadm.go:319] OS: Linux
	I1212 19:59:36.287216   54219 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 19:59:36.287263   54219 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 19:59:36.287309   54219 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 19:59:36.287356   54219 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 19:59:36.287415   54219 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 19:59:36.287462   54219 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 19:59:36.287505   54219 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 19:59:36.287552   54219 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 19:59:36.287596   54219 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 19:59:36.350092   54219 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 19:59:36.350201   54219 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 19:59:36.350291   54219 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 19:59:36.357029   54219 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 19:59:36.360551   54219 out.go:252]   - Generating certificates and keys ...
	I1212 19:59:36.360649   54219 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 19:59:36.360718   54219 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 19:59:36.360805   54219 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 19:59:36.360872   54219 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 19:59:36.360946   54219 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 19:59:36.361003   54219 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 19:59:36.361117   54219 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 19:59:36.361314   54219 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 19:59:36.361808   54219 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 19:59:36.362227   54219 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 19:59:36.362588   54219 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 19:59:36.362716   54219 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 19:59:36.513194   54219 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 19:59:36.762182   54219 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 19:59:37.087768   54219 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 19:59:37.827220   54219 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 19:59:38.025150   54219 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 19:59:38.026038   54219 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 19:59:38.030783   54219 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 19:59:38.034177   54219 out.go:252]   - Booting up control plane ...
	I1212 19:59:38.034305   54219 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 19:59:38.035144   54219 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 19:59:38.036428   54219 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 19:59:38.058524   54219 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 19:59:38.058720   54219 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 19:59:38.067348   54219 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 19:59:38.067823   54219 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 19:59:38.067969   54219 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 19:59:38.202645   54219 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 19:59:38.202775   54219 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:03:38.203202   54219 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000900998s
	I1212 20:03:38.203226   54219 kubeadm.go:319] 
	I1212 20:03:38.203283   54219 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:03:38.203315   54219 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:03:38.203419   54219 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:03:38.203424   54219 kubeadm.go:319] 
	I1212 20:03:38.203527   54219 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:03:38.203558   54219 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:03:38.203588   54219 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:03:38.203591   54219 kubeadm.go:319] 
	I1212 20:03:38.208746   54219 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 20:03:38.209173   54219 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:03:38.209280   54219 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:03:38.209544   54219 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 20:03:38.209548   54219 kubeadm.go:319] 
	I1212 20:03:38.209616   54219 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1212 20:03:38.209718   54219 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000900998s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
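This init attempt, and the retry that follows, die at the same gate: kubeadm's kubelet-check polls http://127.0.0.1:10248/healthz for up to 4m0s, and the socket never opens ("connection refused"), meaning the kubelet process is not staying up at all rather than merely reporting unhealthy. The check is equivalent to the curl shown in the error message; a minimal Go version for reproducing it on the node:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 2 * time.Second}
        resp, err := client.Get("http://127.0.0.1:10248/healthz")
        if err != nil {
            // "connection refused" here matches the kubeadm failure:
            // nothing is listening on the kubelet healthz port.
            fmt.Println("kubelet unhealthy:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("kubelet healthz: %s (%s)\n", resp.Status, body)
    }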
	
	I1212 20:03:38.209803   54219 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1212 20:03:38.624272   54219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:03:38.637409   54219 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:03:38.637464   54219 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:03:38.645037   54219 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:03:38.645047   54219 kubeadm.go:158] found existing configuration files:
	
	I1212 20:03:38.645093   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:03:38.652503   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:03:38.652568   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:03:38.659596   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:03:38.667127   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:03:38.667190   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:03:38.674737   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:03:38.682321   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:03:38.682373   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:03:38.689635   54219 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:03:38.696927   54219 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:03:38.696978   54219 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:03:38.704097   54219 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:03:38.743640   54219 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 20:03:38.743913   54219 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:03:38.814950   54219 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:03:38.815010   54219 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 20:03:38.815042   54219 kubeadm.go:319] OS: Linux
	I1212 20:03:38.815098   54219 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:03:38.815149   54219 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:03:38.815192   54219 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:03:38.815236   54219 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:03:38.815280   54219 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:03:38.815324   54219 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:03:38.815365   54219 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:03:38.815409   54219 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:03:38.815451   54219 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:03:38.887100   54219 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:03:38.887197   54219 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:03:38.887281   54219 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:03:38.896370   54219 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:03:38.901736   54219 out.go:252]   - Generating certificates and keys ...
	I1212 20:03:38.901817   54219 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:03:38.901877   54219 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:03:38.901950   54219 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 20:03:38.902007   54219 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 20:03:38.902071   54219 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 20:03:38.902127   54219 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 20:03:38.902186   54219 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 20:03:38.902243   54219 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 20:03:38.902321   54219 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 20:03:38.902389   54219 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 20:03:38.902423   54219 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 20:03:38.902476   54219 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:03:39.125808   54219 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:03:39.338381   54219 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:03:39.401460   54219 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:03:39.625424   54219 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:03:39.783055   54219 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:03:39.783603   54219 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:03:39.786147   54219 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:03:39.789268   54219 out.go:252]   - Booting up control plane ...
	I1212 20:03:39.789370   54219 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:03:39.789458   54219 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:03:39.790103   54219 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:03:39.810111   54219 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:03:39.810207   54219 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:03:39.818331   54219 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:03:39.818818   54219 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:03:39.818950   54219 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:03:39.956538   54219 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:03:39.956645   54219 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:07:39.951298   54219 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001147362s
	I1212 20:07:39.951324   54219 kubeadm.go:319] 
	I1212 20:07:39.951381   54219 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:07:39.951413   54219 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:07:39.951517   54219 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:07:39.951522   54219 kubeadm.go:319] 
	I1212 20:07:39.951625   54219 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:07:39.951656   54219 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:07:39.951686   54219 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:07:39.951689   54219 kubeadm.go:319] 
	I1212 20:07:39.955566   54219 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 20:07:39.956028   54219 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:07:39.956162   54219 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:07:39.956426   54219 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 20:07:39.956433   54219 kubeadm.go:319] 
	I1212 20:07:39.956501   54219 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
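The SystemVerification warning repeated across both attempts is a plausible root-cause candidate: this 5.15.0-1084-aws node is still on cgroup v1, which kubelet v1.35+ refuses unless the KubeletConfiguration sets failCgroupV1 to false and the validation is skipped explicitly. Whether a host is on v1 or v2 shows up in the filesystem magic of /sys/fs/cgroup; a small detection sketch using golang.org/x/sys/unix:

    package main

    import (
        "fmt"

        "golang.org/x/sys/unix"
    )

    func main() {
        var st unix.Statfs_t
        if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
            fmt.Println("statfs:", err)
            return
        }
        if st.Type == unix.CGROUP2_SUPER_MAGIC {
            fmt.Println("cgroup v2 (unified hierarchy)")
        } else {
            fmt.Println("cgroup v1: kubelet v1.35+ needs failCgroupV1: false")
        }
    }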
	I1212 20:07:39.956558   54219 kubeadm.go:403] duration metric: took 12m7.846093292s to StartCluster
	I1212 20:07:39.956588   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:07:39.956652   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:07:39.984872   54219 cri.go:89] found id: ""
	I1212 20:07:39.984887   54219 logs.go:282] 0 containers: []
	W1212 20:07:39.984894   54219 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:07:39.984900   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:07:39.984958   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:07:40.008408   54219 cri.go:89] found id: ""
	I1212 20:07:40.008426   54219 logs.go:282] 0 containers: []
	W1212 20:07:40.008433   54219 logs.go:284] No container was found matching "etcd"
	I1212 20:07:40.008439   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:07:40.008502   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:07:40.051885   54219 cri.go:89] found id: ""
	I1212 20:07:40.051899   54219 logs.go:282] 0 containers: []
	W1212 20:07:40.051906   54219 logs.go:284] No container was found matching "coredns"
	I1212 20:07:40.051911   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:07:40.051971   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:07:40.078448   54219 cri.go:89] found id: ""
	I1212 20:07:40.078462   54219 logs.go:282] 0 containers: []
	W1212 20:07:40.078469   54219 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:07:40.078473   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:07:40.078533   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:07:40.105530   54219 cri.go:89] found id: ""
	I1212 20:07:40.105555   54219 logs.go:282] 0 containers: []
	W1212 20:07:40.105562   54219 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:07:40.105568   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:07:40.105632   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:07:40.134868   54219 cri.go:89] found id: ""
	I1212 20:07:40.134884   54219 logs.go:282] 0 containers: []
	W1212 20:07:40.134911   54219 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:07:40.134917   54219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:07:40.134977   54219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:07:40.160769   54219 cri.go:89] found id: ""
	I1212 20:07:40.160782   54219 logs.go:282] 0 containers: []
	W1212 20:07:40.160789   54219 logs.go:284] No container was found matching "kindnet"
	I1212 20:07:40.160798   54219 logs.go:123] Gathering logs for container status ...
	I1212 20:07:40.160808   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:07:40.187973   54219 logs.go:123] Gathering logs for kubelet ...
	I1212 20:07:40.187990   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:07:40.250924   54219 logs.go:123] Gathering logs for dmesg ...
	I1212 20:07:40.250942   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:07:40.266149   54219 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:07:40.266165   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:07:40.328697   54219 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:07:40.319521   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:40.320506   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:40.322091   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:40.322633   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:07:40.324257   20894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 20:07:40.328707   54219 logs.go:123] Gathering logs for containerd ...
	I1212 20:07:40.328717   54219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1212 20:07:40.395302   54219 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001147362s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 20:07:40.395340   54219 out.go:285] * 
	W1212 20:07:40.395406   54219 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1212 20:07:40.395426   54219 out.go:285] * 
	W1212 20:07:40.397542   54219 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:07:40.403271   54219 out.go:203] 
	W1212 20:07:40.407023   54219 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1212 20:07:40.407080   54219 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 20:07:40.407103   54219 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 20:07:40.410913   54219 out.go:203] 
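
The error record above already names the next diagnostic steps. A minimal sketch of running them by hand, assuming shell access to the node for this profile (the inner commands are the ones the message itself recommends):

    # open a shell on the minikube node for this profile
    minikube ssh -p functional-384006
    # inside the node: unit state and recent kubelet journal
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    # probe the same healthz endpoint kubeadm was polling
    curl -sSL http://127.0.0.1:10248/healthz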
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409212463Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409233845Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409270693Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409288186Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409297991Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409313604Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409322646Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409334633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409357730Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409389073Z" level=info msg="Connect containerd service"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409646705Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.410157440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.430913489Z" level=info msg="Start subscribing containerd event"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.431535088Z" level=info msg="Start recovering state"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.431784515Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.431871117Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469097271Z" level=info msg="Start event monitor"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469264239Z" level=info msg="Start cni network conf syncer for default"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469333685Z" level=info msg="Start streaming server"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469389199Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469443014Z" level=info msg="runtime interface starting up..."
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469502196Z" level=info msg="starting plugins..."
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469562690Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 19:55:30 functional-384006 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.471989321Z" level=info msg="containerd successfully booted in 0.083546s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:09:57.600372   22525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:57.601028   22525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:57.602754   22525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:57.603293   22525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:09:57.604884   22525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 19:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014827] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497798] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.037128] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.743560] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.524348] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 20:09:57 up 52 min,  0 user,  load average: 0.18, 0.29, 0.38
	Linux functional-384006 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:09:54 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:09:55 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 500.
	Dec 12 20:09:55 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:09:55 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:09:55 functional-384006 kubelet[22409]: E1212 20:09:55.244845   22409 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:09:55 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:09:55 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:09:55 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 501.
	Dec 12 20:09:55 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:09:55 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:09:55 functional-384006 kubelet[22415]: E1212 20:09:55.998833   22415 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:09:56 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:09:56 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:09:56 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 502.
	Dec 12 20:09:56 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:09:56 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:09:56 functional-384006 kubelet[22434]: E1212 20:09:56.731622   22434 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:09:56 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:09:56 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:09:57 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 503.
	Dec 12 20:09:57 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:09:57 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:09:57 functional-384006 kubelet[22504]: E1212 20:09:57.513378   22504 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:09:57 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:09:57 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-384006 -n functional-384006
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-384006 -n functional-384006: exit status 2 (383.719355ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-384006" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.30s)
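
Minikube's suggestion in the K8S_KUBELET_NOT_RUNNING block above amounts to restarting the profile with an extra kubelet argument. A sketch built only from the profile name, Kubernetes version, and flag that appear in the log; whether it helps on a cgroup v1 host is what the linked issue #4172 tracks:

    minikube start -p functional-384006 \
      --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd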

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.62s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
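The warnings below are the helper polling the apiserver with a label selector until the 4m0s deadline. The equivalent hand query, assuming minikube registered the profile name as a kubeconfig context (its default behavior), fails with the same connection-refused error while the apiserver is down:

    kubectl --context functional-384006 -n kube-system get pods \
      -l integration-test=storage-provisioner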
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last warning repeated 9 more times]
I1212 20:07:58.954170    4120 retry.go:31] will retry after 3.862353169s: Temporary Error: Get "http://10.110.125.27": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last warning repeated 13 more times]
I1212 20:08:12.817094    4120 retry.go:31] will retry after 3.665751447s: Temporary Error: Get "http://10.110.125.27": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last warning repeated 13 more times]
I1212 20:08:26.484173    4120 retry.go:31] will retry after 4.873346196s: Temporary Error: Get "http://10.110.125.27": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last warning repeated 14 more times]
I1212 20:08:41.358074    4120 retry.go:31] will retry after 5.258584812s: Temporary Error: Get "http://10.110.125.27": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last warning repeated 14 more times]
I1212 20:08:56.618252    4120 retry.go:31] will retry after 13.014639888s: Temporary Error: Get "http://10.110.125.27": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
[the same pod list warning repeated 23 times]
I1212 20:09:19.634106    4120 retry.go:31] will retry after 26.25584796s: Temporary Error: Get "http://10.110.125.27": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
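The two retry.go lines above show the client side of this wait: an HTTP GET against the service address with a per-request timeout, retried on a growing interval until an overall deadline expires. A minimal Go sketch of that pattern follows; the backoff schedule, timeouts, and structure are illustrative assumptions, not minikube's actual retry.go implementation.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// pollURL issues GETs against url until one succeeds or the overall
// deadline passes, sleeping a growing backoff between attempts.
// Illustrative sketch only; not minikube's retry.go.
func pollURL(url string, deadline time.Duration) error {
	client := &http.Client{Timeout: 10 * time.Second} // per-request timeout, as in "Client.Timeout exceeded"
	backoff := 2 * time.Second
	start := time.Now()
	for time.Since(start) < deadline {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil // the service finally answered
		}
		fmt.Printf("will retry after %v: %v\n", backoff, err)
		time.Sleep(backoff)
		backoff *= 2 // grow the wait, mirroring the increasing intervals in the log
	}
	return fmt.Errorf("gave up after %v", deadline)
}

func main() {
	_ = pollURL("http://10.110.125.27", 2*time.Minute)
}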
[the same pod list warning repeated 32 times]
E1212 20:09:51.906604    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the preceding "connection refused" warning repeats 30 more times over the 4m0s poll]
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-384006 -n functional-384006
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-384006 -n functional-384006: exit status 2 (300.714387ms)

-- stdout --
	Stopped

-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-384006" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
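For manual triage, the wait loop above only polls the kube-system pod list by label selector until it gives up at 4m0s. Equivalent one-off checks from the host would look like the sketch below (standard kubectl/curl invocations assumed for illustration, not commands taken from this log):

	# Probe the apiserver endpoint the helper keeps dialing (assumes the default /livez handler is reachable anonymously):
	curl -sk --max-time 5 https://192.168.49.2:8441/livez; echo
	# Repeat the exact pod-list query the helper issues through client-go:
	kubectl --context functional-384006 -n kube-system get pods -l integration-test=storage-provisioner

Both would fail the same way here, since nothing is listening on 8441.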
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-384006
helpers_test.go:244: (dbg) docker inspect functional-384006:

-- stdout --
	[
	    {
	        "Id": "b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17",
	        "Created": "2025-12-12T19:40:49.413785329Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43086,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T19:40:49.485581335Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/hosts",
	        "LogPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17-json.log",
	        "Name": "/functional-384006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-384006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-384006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17",
	                "LowerDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296-init/diff:/var/lib/docker/overlay2/e045d4bf347c64f3cbf42a97f0cb5729ed5699bda73ca5751717f555f7c01df1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/merged",
	                "UpperDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/diff",
	                "WorkDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-384006",
	                "Source": "/var/lib/docker/volumes/functional-384006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-384006",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-384006",
	                "name.minikube.sigs.k8s.io": "functional-384006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "36cb954f7d4f6bf90d415ba6b309740af43913afba20f6d7d93ec3c7d90d4de5",
	            "SandboxKey": "/var/run/docker/netns/36cb954f7d4f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-384006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:63:42:b7:50:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef3790c143c0333ab10341d6a40177cef53914dddf926d048a811221f7b4d25e",
	                    "EndpointID": "d9f77e46696253f9c3ce8a0a36703d7a03738ae348c39276dbe99fc3079fb5ee",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-384006",
	                        "b1a98cbc4698"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
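Most of the inspect dump above is irrelevant to this failure; the two fields worth pulling out are the container's IP on the cluster network and the host port published for the apiserver. Hypothetical triage commands using the standard docker inspect --format flag (the hyphenated network name forces the index function in the Go template):

	docker inspect functional-384006 --format '{{(index .NetworkSettings.Networks "functional-384006").IPAddress}}'
	docker inspect functional-384006 --format '{{(index .NetworkSettings.Ports "8441/tcp" 0).HostPort}}'

Per the dump, the container does hold 192.168.49.2 and publishes 8441 on 127.0.0.1:32791, so the refusal comes from inside the container, not from Docker networking.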
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-384006 -n functional-384006
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-384006 -n functional-384006: exit status 2 (292.274133ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-384006 image load --daemon kicbase/echo-server:functional-384006 --alsologtostderr                                                                   │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ image          │ functional-384006 image ls                                                                                                                                      │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ image          │ functional-384006 image save kicbase/echo-server:functional-384006 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ image          │ functional-384006 image rm kicbase/echo-server:functional-384006 --alsologtostderr                                                                              │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ image          │ functional-384006 image ls                                                                                                                                      │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ image          │ functional-384006 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ image          │ functional-384006 image ls                                                                                                                                      │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ image          │ functional-384006 image save --daemon kicbase/echo-server:functional-384006 --alsologtostderr                                                                   │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh            │ functional-384006 ssh sudo cat /etc/test/nested/copy/4120/hosts                                                                                                 │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh            │ functional-384006 ssh sudo cat /etc/ssl/certs/4120.pem                                                                                                          │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh            │ functional-384006 ssh sudo cat /usr/share/ca-certificates/4120.pem                                                                                              │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh            │ functional-384006 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh            │ functional-384006 ssh sudo cat /etc/ssl/certs/41202.pem                                                                                                         │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh            │ functional-384006 ssh sudo cat /usr/share/ca-certificates/41202.pem                                                                                             │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh            │ functional-384006 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ image          │ functional-384006 image ls --format short --alsologtostderr                                                                                                     │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ update-context │ functional-384006 update-context --alsologtostderr -v=2                                                                                                         │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh            │ functional-384006 ssh pgrep buildkitd                                                                                                                           │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ image          │ functional-384006 image build -t localhost/my-image:functional-384006 testdata/build --alsologtostderr                                                          │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ image          │ functional-384006 image ls                                                                                                                                      │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ image          │ functional-384006 image ls --format yaml --alsologtostderr                                                                                                      │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ image          │ functional-384006 image ls --format json --alsologtostderr                                                                                                      │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ image          │ functional-384006 image ls --format table --alsologtostderr                                                                                                     │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ update-context │ functional-384006 update-context --alsologtostderr -v=2                                                                                                         │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ update-context │ functional-384006 update-context --alsologtostderr -v=2                                                                                                         │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:10:13
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:10:13.630130   71633 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:10:13.630368   71633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:10:13.630398   71633 out.go:374] Setting ErrFile to fd 2...
	I1212 20:10:13.630418   71633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:10:13.630729   71633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 20:10:13.631190   71633 out.go:368] Setting JSON to false
	I1212 20:10:13.632211   71633 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3163,"bootTime":1765567051,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 20:10:13.632321   71633 start.go:143] virtualization:  
	I1212 20:10:13.635605   71633 out.go:179] * [functional-384006] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:10:13.639456   71633 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:10:13.639530   71633 notify.go:221] Checking for updates...
	I1212 20:10:13.645472   71633 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:10:13.648307   71633 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 20:10:13.651207   71633 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	I1212 20:10:13.654091   71633 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:10:13.657072   71633 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:10:13.660514   71633 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 20:10:13.661147   71633 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:10:13.686958   71633 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:10:13.687088   71633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:10:13.748518   71633 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:10:13.739495162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:10:13.748622   71633 docker.go:319] overlay module found
	I1212 20:10:13.753675   71633 out.go:179] * Using the docker driver based on existing profile
	I1212 20:10:13.756658   71633 start.go:309] selected driver: docker
	I1212 20:10:13.756696   71633 start.go:927] validating driver "docker" against &{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:10:13.756792   71633 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:10:13.756901   71633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:10:13.815565   71633 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:10:13.805182249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:10:13.816056   71633 cni.go:84] Creating CNI manager for ""
	I1212 20:10:13.816121   71633 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 20:10:13.816169   71633 start.go:353] cluster config:
	{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:10:13.819358   71633 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 20:10:20 functional-384006 containerd[9654]: time="2025-12-12T20:10:20.145021437Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 20:10:20 functional-384006 containerd[9654]: time="2025-12-12T20:10:20.145514707Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-384006\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 20:10:21 functional-384006 containerd[9654]: time="2025-12-12T20:10:21.224536266Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-384006\""
	Dec 12 20:10:21 functional-384006 containerd[9654]: time="2025-12-12T20:10:21.227492099Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-384006\""
	Dec 12 20:10:21 functional-384006 containerd[9654]: time="2025-12-12T20:10:21.229728419Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 12 20:10:21 functional-384006 containerd[9654]: time="2025-12-12T20:10:21.239495231Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-384006\" returns successfully"
	Dec 12 20:10:21 functional-384006 containerd[9654]: time="2025-12-12T20:10:21.493660562Z" level=info msg="No images store for sha256:192d3df68a495a84169863ed9c5ddff87f1fc0ceeb514b347eea5f531ad2645c"
	Dec 12 20:10:21 functional-384006 containerd[9654]: time="2025-12-12T20:10:21.495977184Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-384006\""
	Dec 12 20:10:21 functional-384006 containerd[9654]: time="2025-12-12T20:10:21.508767823Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 20:10:21 functional-384006 containerd[9654]: time="2025-12-12T20:10:21.509168148Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-384006\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 20:10:22 functional-384006 containerd[9654]: time="2025-12-12T20:10:22.296059237Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-384006\""
	Dec 12 20:10:22 functional-384006 containerd[9654]: time="2025-12-12T20:10:22.298429215Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-384006\""
	Dec 12 20:10:22 functional-384006 containerd[9654]: time="2025-12-12T20:10:22.300609906Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 12 20:10:22 functional-384006 containerd[9654]: time="2025-12-12T20:10:22.309349998Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-384006\" returns successfully"
	Dec 12 20:10:22 functional-384006 containerd[9654]: time="2025-12-12T20:10:22.981808097Z" level=info msg="No images store for sha256:271852f1e04aaa38c63e1a935492b47c188eeefbeb17a86670861298dfa20c9c"
	Dec 12 20:10:22 functional-384006 containerd[9654]: time="2025-12-12T20:10:22.984071256Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-384006\""
	Dec 12 20:10:22 functional-384006 containerd[9654]: time="2025-12-12T20:10:22.991205771Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 20:10:22 functional-384006 containerd[9654]: time="2025-12-12T20:10:22.991547842Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-384006\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 20:10:29 functional-384006 containerd[9654]: time="2025-12-12T20:10:29.283696664Z" level=info msg="connecting to shim sbr955qt8ksna7hh0qczj36hu" address="unix:///run/containerd/s/3ce57a15a5631d16daf5b8fee6a4133ddc8d0b58e23fcfbab9d4bdb9976db84b" namespace=k8s.io protocol=ttrpc version=3
	Dec 12 20:10:29 functional-384006 containerd[9654]: time="2025-12-12T20:10:29.356547045Z" level=info msg="shim disconnected" id=sbr955qt8ksna7hh0qczj36hu namespace=k8s.io
	Dec 12 20:10:29 functional-384006 containerd[9654]: time="2025-12-12T20:10:29.357362120Z" level=info msg="cleaning up after shim disconnected" id=sbr955qt8ksna7hh0qczj36hu namespace=k8s.io
	Dec 12 20:10:29 functional-384006 containerd[9654]: time="2025-12-12T20:10:29.357402234Z" level=info msg="cleaning up dead shim" id=sbr955qt8ksna7hh0qczj36hu namespace=k8s.io
	Dec 12 20:10:29 functional-384006 containerd[9654]: time="2025-12-12T20:10:29.637786469Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-384006\""
	Dec 12 20:10:29 functional-384006 containerd[9654]: time="2025-12-12T20:10:29.646288824Z" level=info msg="ImageCreate event name:\"sha256:06f44feca4d5243f8148827aa41f020f045feda1157d8654198a92aa82e6a6d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 20:10:29 functional-384006 containerd[9654]: time="2025-12-12T20:10:29.646634333Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-384006\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:11:50.486490   25050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:11:50.487122   25050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:11:50.488970   25050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:11:50.489502   25050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:11:50.491081   25050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 19:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014827] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497798] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.037128] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.743560] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.524348] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 20:11:50 up 54 min,  0 user,  load average: 0.42, 0.39, 0.40
	Linux functional-384006 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:11:46 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:11:47 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 650.
	Dec 12 20:11:47 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:11:47 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:11:47 functional-384006 kubelet[24921]: E1212 20:11:47.740431   24921 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:11:47 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:11:47 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:11:48 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 651.
	Dec 12 20:11:48 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:11:48 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:11:48 functional-384006 kubelet[24927]: E1212 20:11:48.499105   24927 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:11:48 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:11:48 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:11:49 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 652.
	Dec 12 20:11:49 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:11:49 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:11:49 functional-384006 kubelet[24932]: E1212 20:11:49.251112   24932 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:11:49 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:11:49 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:11:49 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 653.
	Dec 12 20:11:49 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:11:49 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:11:50 functional-384006 kubelet[24967]: E1212 20:11:50.016214   24967 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:11:50 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:11:50 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-384006 -n functional-384006
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-384006 -n functional-384006: exit status 2 (319.3889ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-384006" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.62s)
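The kubelet journal in the logs above points at the actual root cause: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), systemd restarts it in a loop (restart counter at 650+), and with no kubelet the apiserver never comes back, which accounts for every connection-refused error in this test. A quick host-side check, as a sketch with standard tooling (paths assumed, not taken from this log):

	# "cgroup2fs" means the host is on cgroup v2; "tmpfs" means the legacy v1 hierarchy:
	stat -fc %T /sys/fs/cgroup/
	# Docker reports the same detail:
	docker info --format '{{.CgroupVersion}}'

The Ubuntu 20.04 / 5.15 kernel on this Jenkins host defaults to the v1 hierarchy, consistent with the error.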

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-384006 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-384006 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (65.039984ms)

-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-384006 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
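The template panic itself is secondary: with the apiserver refusing connections, kubectl returns an empty items list and index .items 0 fails on it. A guarded variant of the same query (hypothetical; the test deliberately uses the unguarded form) would print nothing instead of erroring, since an empty slice is falsy in Go templates:

	kubectl --context functional-384006 get nodes --output=go-template --template='{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{end}}'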
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
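All six label assertions above fail for the same reason: with the apiserver refusing connections, kubectl get nodes renders an empty List ("items":[]), and the template's (index .items 0) then errors because text/template's index builtin range-checks its argument (the exact wording, surfaced via reflect in this kubectl build, varies by Go version). A self-contained reproduction using only the standard library:

	package main

	import (
		"fmt"
		"os"
		"text/template"
	)

	func main() {
		const tmpl = `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
		// What kubectl hands the template engine when the apiserver is down:
		// an empty List, i.e. items == [].
		data := map[string]interface{}{"items": []interface{}{}}
		t := template.Must(template.New("output").Parse(tmpl))
		if err := t.Execute(os.Stdout, data); err != nil {
			// Fails with an index-out-of-range error, as in the log above.
			fmt.Fprintln(os.Stderr, err)
		}
	}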
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-384006
helpers_test.go:244: (dbg) docker inspect functional-384006:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17",
	        "Created": "2025-12-12T19:40:49.413785329Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43086,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T19:40:49.485581335Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/hosts",
	        "LogPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17-json.log",
	        "Name": "/functional-384006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-384006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-384006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17",
	                "LowerDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296-init/diff:/var/lib/docker/overlay2/e045d4bf347c64f3cbf42a97f0cb5729ed5699bda73ca5751717f555f7c01df1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/merged",
	                "UpperDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/diff",
	                "WorkDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-384006",
	                "Source": "/var/lib/docker/volumes/functional-384006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-384006",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-384006",
	                "name.minikube.sigs.k8s.io": "functional-384006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "36cb954f7d4f6bf90d415ba6b309740af43913afba20f6d7d93ec3c7d90d4de5",
	            "SandboxKey": "/var/run/docker/netns/36cb954f7d4f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-384006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:63:42:b7:50:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef3790c143c0333ab10341d6a40177cef53914dddf926d048a811221f7b4d25e",
	                    "EndpointID": "d9f77e46696253f9c3ce8a0a36703d7a03738ae348c39276dbe99fc3079fb5ee",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-384006",
	                        "b1a98cbc4698"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
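One detail worth noting in the inspect output: HostConfig.PortBindings requests ephemeral host ports ("HostPort": ""), while NetworkSettings.Ports shows what Docker actually assigned, e.g. 8441/tcp published on 127.0.0.1:32791. minikube reads the assignment back with an inspect template (the same call appears verbatim in the tunnel stderr later in this report). A sketch of that lookup, assuming only the docker CLI on PATH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPort asks Docker which host port was assigned to a container port,
	// using the same Go-template shape as the cli_runner call in this report.
	func hostPort(container, port string) (string, error) {
		format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		// For the container above, "8441/tcp" resolves to "32791".
		p, err := hostPort("functional-384006", "8441/tcp")
		fmt.Println(p, err)
	}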
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-384006 -n functional-384006
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-384006 -n functional-384006: exit status 2 (300.732406ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-384006 service hello-node --url                                                                                                          │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ ssh       │ functional-384006 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ mount     │ -p functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1899271428/001:/mount-9p --alsologtostderr -v=1              │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ ssh       │ functional-384006 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh       │ functional-384006 ssh -- ls -la /mount-9p                                                                                                           │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh       │ functional-384006 ssh cat /mount-9p/test-1765570203865063779                                                                                        │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh       │ functional-384006 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ ssh       │ functional-384006 ssh sudo umount -f /mount-9p                                                                                                      │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ mount     │ -p functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo34640341/001:/mount-9p --alsologtostderr -v=1 --port 46464   │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ ssh       │ functional-384006 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ ssh       │ functional-384006 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh       │ functional-384006 ssh -- ls -la /mount-9p                                                                                                           │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh       │ functional-384006 ssh sudo umount -f /mount-9p                                                                                                      │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ mount     │ -p functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4202071176/001:/mount1 --alsologtostderr -v=1                │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ ssh       │ functional-384006 ssh findmnt -T /mount1                                                                                                            │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ mount     │ -p functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4202071176/001:/mount2 --alsologtostderr -v=1                │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ mount     │ -p functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4202071176/001:/mount3 --alsologtostderr -v=1                │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ ssh       │ functional-384006 ssh findmnt -T /mount1                                                                                                            │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh       │ functional-384006 ssh findmnt -T /mount2                                                                                                            │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ ssh       │ functional-384006 ssh findmnt -T /mount3                                                                                                            │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │ 12 Dec 25 20:10 UTC │
	│ mount     │ -p functional-384006 --kill=true                                                                                                                    │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ start     │ -p functional-384006 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ start     │ -p functional-384006 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ start     │ -p functional-384006 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-384006 --alsologtostderr -v=1                                                                                      │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 20:10 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:10:13
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:10:13.630130   71633 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:10:13.630368   71633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:10:13.630398   71633 out.go:374] Setting ErrFile to fd 2...
	I1212 20:10:13.630418   71633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:10:13.630729   71633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 20:10:13.631190   71633 out.go:368] Setting JSON to false
	I1212 20:10:13.632211   71633 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3163,"bootTime":1765567051,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 20:10:13.632321   71633 start.go:143] virtualization:  
	I1212 20:10:13.635605   71633 out.go:179] * [functional-384006] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:10:13.639456   71633 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:10:13.639530   71633 notify.go:221] Checking for updates...
	I1212 20:10:13.645472   71633 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:10:13.648307   71633 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 20:10:13.651207   71633 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	I1212 20:10:13.654091   71633 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:10:13.657072   71633 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:10:13.660514   71633 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 20:10:13.661147   71633 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:10:13.686958   71633 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:10:13.687088   71633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:10:13.748518   71633 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:10:13.739495162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:10:13.748622   71633 docker.go:319] overlay module found
	I1212 20:10:13.753675   71633 out.go:179] * Using the docker driver based on existing profile
	I1212 20:10:13.756658   71633 start.go:309] selected driver: docker
	I1212 20:10:13.756696   71633 start.go:927] validating driver "docker" against &{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:10:13.756792   71633 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:10:13.756901   71633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:10:13.815565   71633 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:10:13.805182249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:10:13.816056   71633 cni.go:84] Creating CNI manager for ""
	I1212 20:10:13.816121   71633 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 20:10:13.816169   71633 start.go:353] cluster config:
	{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:10:13.819358   71633 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409212463Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409233845Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409270693Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409288186Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409297991Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409313604Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409322646Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409334633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409357730Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409389073Z" level=info msg="Connect containerd service"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.409646705Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.410157440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.430913489Z" level=info msg="Start subscribing containerd event"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.431535088Z" level=info msg="Start recovering state"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.431784515Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.431871117Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469097271Z" level=info msg="Start event monitor"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469264239Z" level=info msg="Start cni network conf syncer for default"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469333685Z" level=info msg="Start streaming server"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469389199Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469443014Z" level=info msg="runtime interface starting up..."
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469502196Z" level=info msg="starting plugins..."
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.469562690Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 19:55:30 functional-384006 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 12 19:55:30 functional-384006 containerd[9654]: time="2025-12-12T19:55:30.471989321Z" level=info msg="containerd successfully booted in 0.083546s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:10:16.544751   23521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:16.545628   23521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:16.547344   23521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:16.547900   23521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:10:16.549408   23521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 19:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014827] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497798] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.037128] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.743560] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.524348] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 20:10:16 up 52 min,  0 user,  load average: 0.54, 0.37, 0.40
	Linux functional-384006 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:10:13 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:10:13 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 525.
	Dec 12 20:10:13 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:10:13 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:10:14 functional-384006 kubelet[23276]: E1212 20:10:14.015565   23276 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:10:14 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:10:14 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:10:14 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 526.
	Dec 12 20:10:14 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:10:14 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:10:14 functional-384006 kubelet[23305]: E1212 20:10:14.761812   23305 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:10:14 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:10:14 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:10:15 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 527.
	Dec 12 20:10:15 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:10:15 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:10:15 functional-384006 kubelet[23411]: E1212 20:10:15.505142   23411 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:10:15 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:10:15 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:10:16 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 528.
	Dec 12 20:10:16 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:10:16 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:10:16 functional-384006 kubelet[23446]: E1212 20:10:16.251208   23446 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:10:16 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:10:16 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
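The kubelet section above is the root cause behind every connection-refused failure in this report: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("cgroup v1 support is unsupported and will be removed in a future release"), systemd restart-loops it (counter at 528 and climbing), and the apiserver on 8441 therefore never comes up. A hedged sketch of the conventional userspace probe for the unified (v2) hierarchy, not minikube's or kubelet's actual check:

	package main

	import (
		"fmt"
		"os"
	)

	// cgroupV2 reports whether the host mounts the unified cgroup v2 hierarchy.
	// The presence of /sys/fs/cgroup/cgroup.controllers is the usual userspace
	// signal (systemd, runc, and other runtimes rely on equivalent checks).
	func cgroupV2() bool {
		_, err := os.Stat("/sys/fs/cgroup/cgroup.controllers")
		return err == nil
	}

	func main() {
		if cgroupV2() {
			fmt.Println("cgroup v2 (unified): kubelet v1.35 can start")
		} else {
			fmt.Println("cgroup v1: kubelet v1.35 refuses to run, as in the log above")
		}
	}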
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-384006 -n functional-384006
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-384006 -n functional-384006: exit status 2 (335.266761ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-384006" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-384006 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-384006 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1212 20:07:48.326410   67225 out.go:360] Setting OutFile to fd 1 ...
I1212 20:07:48.338159   67225 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:07:48.338171   67225 out.go:374] Setting ErrFile to fd 2...
I1212 20:07:48.338178   67225 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:07:48.338577   67225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
I1212 20:07:48.338932   67225 mustload.go:66] Loading cluster: functional-384006
I1212 20:07:48.339807   67225 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1212 20:07:48.341477   67225 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
I1212 20:07:48.380371   67225 host.go:66] Checking if "functional-384006" exists ...
I1212 20:07:48.380687   67225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1212 20:07:48.560687   67225 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:07:48.526816963 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1212 20:07:48.560815   67225 api_server.go:166] Checking apiserver status ...
I1212 20:07:48.560876   67225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 20:07:48.560920   67225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
I1212 20:07:48.589982   67225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
W1212 20:07:48.724679   67225 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1212 20:07:48.728079   67225 out.go:179] * The control-plane node functional-384006 apiserver is not running: (state=Stopped)
I1212 20:07:48.731099   67225 out.go:179]   To start a cluster, run: "minikube start -p functional-384006"

                                                
                                                
stdout: * The control-plane node functional-384006 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-384006"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-384006 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-384006 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-384006 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-384006 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 67224: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-384006 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-384006 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)
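Exit code 103 here is a guard firing before the tunnel does any work: the mustload path probes apiserver liveness by running sudo pgrep -xnf kube-apiserver.*minikube.* inside the node (over SSH, as the stderr shows), and pgrep's exit status 1 (no matching process) is read as state=Stopped. A simplified sketch of that probe, run locally for illustration rather than through ssh_runner:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// apiserverPidExists distinguishes pgrep's "no match" exit (status 1)
	// from a genuine failure of the command itself.
	func apiserverPidExists() (bool, error) {
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		if err == nil {
			return true, nil
		}
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			return false, nil // no matching process: apiserver not running
		}
		return false, err // pgrep itself failed
	}

	func main() {
		ok, err := apiserverPidExists()
		fmt.Println(ok, err) // here: false <nil>, so the tunnel bails out
	}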

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.1s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-384006 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-384006 apply -f testdata/testsvc.yaml: exit status 1 (96.879598ms)

** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-384006 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.10s)
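
The stderr above already names the standard workaround for a failed OpenAPI download; a minimal sketch of it, reusing the profile and manifest from this run, is below. Note that in this particular failure the apply would still fail, since the apiserver at 192.168.49.2:8441 is refusing connections outright, so skipping validation only helps when the OpenAPI fetch is the sole problem:

  # skip client-side validation (and its OpenAPI fetch), as the error message suggests
  kubectl --context functional-384006 apply --validate=false -f testdata/testsvc.yaml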

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (127.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.110.125.27": Temporary Error: Get "http://10.110.125.27": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-384006 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-384006 get svc nginx-svc: exit status 1 (72.860186ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-384006 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (127.01s)
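
Before attributing a timeout like this to the tunnel itself, a minimal triage sketch (assuming the profile name from this run) is to confirm whether the apiserver is serving at all; both commands below are standard minikube/kubectl invocations:

  # report host/kubelet/apiserver state for the profile
  out/minikube-linux-arm64 -p functional-384006 status
  # query the apiserver health endpoint directly through the kubeconfig context
  kubectl --context functional-384006 get --raw /readyz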

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-384006 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-384006 create deployment hello-node --image kicbase/echo-server: exit status 1 (56.55616ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-384006 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-384006 service list: exit status 103 (260.704939ms)

-- stdout --
	* The control-plane node functional-384006 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-384006"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-384006 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-384006 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-384006\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-384006 service list -o json: exit status 103 (263.396779ms)

-- stdout --
	* The control-plane node functional-384006 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-384006"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-384006 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-384006 service --namespace=default --https --url hello-node: exit status 103 (259.435239ms)

-- stdout --
	* The control-plane node functional-384006 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-384006"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-384006 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-384006 service hello-node --url --format={{.IP}}: exit status 103 (255.443907ms)

-- stdout --
	* The control-plane node functional-384006 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-384006"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-384006 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-384006 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-384006\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-384006 service hello-node --url: exit status 103 (295.179929ms)

-- stdout --
	* The control-plane node functional-384006 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-384006"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-384006 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-384006 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-384006"
functional_test.go:1579: failed to parse "* The control-plane node functional-384006 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-384006\"": parse "* The control-plane node functional-384006 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-384006\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.30s)
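
The ServiceCmd failures above (List, JSONOutput, HTTPS, Format, URL) all exit with status 103 and print the same guidance, so they share a single root cause: the stopped apiserver. The recovery the CLI itself proposes is simply restarting the profile:

  # restart the stopped control plane, exactly as the minikube output above instructs
  out/minikube-linux-arm64 start -p functional-384006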

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1899271428/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765570203865063779" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1899271428/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765570203865063779" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1899271428/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765570203865063779" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1899271428/001/test-1765570203865063779
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-384006 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (336.343485ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1212 20:10:04.201671    4120 retry.go:31] will retry after 288.564537ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 12 20:10 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 12 20:10 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 12 20:10 test-1765570203865063779
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh cat /mount-9p/test-1765570203865063779
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-384006 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-384006 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (53.788262ms)

** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-384006 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-384006 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (275.612662ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=45715)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 12 20:10 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 12 20:10 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 12 20:10 test-1765570203865063779
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-384006 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1899271428/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1899271428/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1899271428/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:45715
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1899271428/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1899271428/001:/mount-9p --alsologtostderr -v=1] stderr:
I1212 20:10:03.925636   69709 out.go:360] Setting OutFile to fd 1 ...
I1212 20:10:03.925829   69709 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:10:03.925840   69709 out.go:374] Setting ErrFile to fd 2...
I1212 20:10:03.925846   69709 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:10:03.926091   69709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
I1212 20:10:03.926331   69709 mustload.go:66] Loading cluster: functional-384006
I1212 20:10:03.926683   69709 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1212 20:10:03.927212   69709 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
I1212 20:10:03.944891   69709 host.go:66] Checking if "functional-384006" exists ...
I1212 20:10:03.945220   69709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1212 20:10:04.040848   69709 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:10:04.029465077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1212 20:10:04.040997   69709 cli_runner.go:164] Run: docker network inspect functional-384006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1212 20:10:04.067199   69709 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1899271428/001 into VM as /mount-9p ...
I1212 20:10:04.070254   69709 out.go:179]   - Mount type:   9p
I1212 20:10:04.073189   69709 out.go:179]   - User ID:      docker
I1212 20:10:04.076098   69709 out.go:179]   - Group ID:     docker
I1212 20:10:04.079079   69709 out.go:179]   - Version:      9p2000.L
I1212 20:10:04.082062   69709 out.go:179]   - Message Size: 262144
I1212 20:10:04.085042   69709 out.go:179]   - Options:      map[]
I1212 20:10:04.087815   69709 out.go:179]   - Bind Address: 192.168.49.1:45715
I1212 20:10:04.090747   69709 out.go:179] * Userspace file server: 
I1212 20:10:04.091054   69709 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1212 20:10:04.091150   69709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
I1212 20:10:04.115083   69709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
I1212 20:10:04.223101   69709 mount.go:180] unmount for /mount-9p ran successfully
I1212 20:10:04.223133   69709 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1212 20:10:04.233563   69709 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=45715,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1212 20:10:04.245259   69709 main.go:127] stdlog: ufs.go:141 connected
I1212 20:10:04.245434   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tversion tag 65535 msize 262144 version '9P2000.L'
I1212 20:10:04.245470   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rversion tag 65535 msize 262144 version '9P2000'
I1212 20:10:04.245704   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1212 20:10:04.245764   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rattach tag 0 aqid (3b5bcd 142f50d6 'd')
I1212 20:10:04.247946   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tstat tag 0 fid 0
I1212 20:10:04.248025   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (3b5bcd 142f50d6 'd') m d775 at 0 mt 1765570203 l 4096 t 0 d 0 ext )
I1212 20:10:04.249333   69709 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/.mount-process: {Name:mk4dcfdc9606e5a05bfb7e477d6ab5c3f36d0844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 20:10:04.249519   69709 mount.go:105] mount successful: ""
I1212 20:10:04.252882   69709 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1899271428/001 to /mount-9p
I1212 20:10:04.255664   69709 out.go:203] 
I1212 20:10:04.258541   69709 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1212 20:10:05.036668   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tstat tag 0 fid 0
I1212 20:10:05.036748   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (3b5bcd 142f50d6 'd') m d775 at 0 mt 1765570203 l 4096 t 0 d 0 ext )
I1212 20:10:05.037129   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Twalk tag 0 fid 0 newfid 1 
I1212 20:10:05.037165   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rwalk tag 0 
I1212 20:10:05.037287   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Topen tag 0 fid 1 mode 0
I1212 20:10:05.037338   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Ropen tag 0 qid (3b5bcd 142f50d6 'd') iounit 0
I1212 20:10:05.037473   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tstat tag 0 fid 0
I1212 20:10:05.037509   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (3b5bcd 142f50d6 'd') m d775 at 0 mt 1765570203 l 4096 t 0 d 0 ext )
I1212 20:10:05.037657   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tread tag 0 fid 1 offset 0 count 262120
I1212 20:10:05.037791   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rread tag 0 count 258
I1212 20:10:05.037964   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tread tag 0 fid 1 offset 258 count 261862
I1212 20:10:05.038004   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rread tag 0 count 0
I1212 20:10:05.038799   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tread tag 0 fid 1 offset 258 count 262120
I1212 20:10:05.038825   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rread tag 0 count 0
I1212 20:10:05.038985   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1212 20:10:05.039022   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rwalk tag 0 (3b5bce 142f50d6 '') 
I1212 20:10:05.039162   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tstat tag 0 fid 2
I1212 20:10:05.039202   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (3b5bce 142f50d6 '') m 644 at 0 mt 1765570203 l 24 t 0 d 0 ext )
I1212 20:10:05.039340   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tstat tag 0 fid 2
I1212 20:10:05.039383   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (3b5bce 142f50d6 '') m 644 at 0 mt 1765570203 l 24 t 0 d 0 ext )
I1212 20:10:05.039526   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tclunk tag 0 fid 2
I1212 20:10:05.039565   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rclunk tag 0
I1212 20:10:05.039712   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Twalk tag 0 fid 0 newfid 2 0:'test-1765570203865063779' 
I1212 20:10:05.039781   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rwalk tag 0 (3b5bd0 142f50d6 '') 
I1212 20:10:05.040150   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tstat tag 0 fid 2
I1212 20:10:05.040188   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rstat tag 0 st ('test-1765570203865063779' 'jenkins' 'jenkins' '' q (3b5bd0 142f50d6 '') m 644 at 0 mt 1765570203 l 24 t 0 d 0 ext )
I1212 20:10:05.040329   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tstat tag 0 fid 2
I1212 20:10:05.040367   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rstat tag 0 st ('test-1765570203865063779' 'jenkins' 'jenkins' '' q (3b5bd0 142f50d6 '') m 644 at 0 mt 1765570203 l 24 t 0 d 0 ext )
I1212 20:10:05.040494   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tclunk tag 0 fid 2
I1212 20:10:05.040518   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rclunk tag 0
I1212 20:10:05.040666   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1212 20:10:05.040704   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rwalk tag 0 (3b5bcf 142f50d6 '') 
I1212 20:10:05.040855   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tstat tag 0 fid 2
I1212 20:10:05.040885   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (3b5bcf 142f50d6 '') m 644 at 0 mt 1765570203 l 24 t 0 d 0 ext )
I1212 20:10:05.041005   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tstat tag 0 fid 2
I1212 20:10:05.041048   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (3b5bcf 142f50d6 '') m 644 at 0 mt 1765570203 l 24 t 0 d 0 ext )
I1212 20:10:05.041157   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tclunk tag 0 fid 2
I1212 20:10:05.041178   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rclunk tag 0
I1212 20:10:05.041308   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tread tag 0 fid 1 offset 258 count 262120
I1212 20:10:05.041338   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rread tag 0 count 0
I1212 20:10:05.041480   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tclunk tag 0 fid 1
I1212 20:10:05.041518   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rclunk tag 0
I1212 20:10:05.308034   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Twalk tag 0 fid 0 newfid 1 0:'test-1765570203865063779' 
I1212 20:10:05.308110   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rwalk tag 0 (3b5bd0 142f50d6 '') 
I1212 20:10:05.308296   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tstat tag 0 fid 1
I1212 20:10:05.308341   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rstat tag 0 st ('test-1765570203865063779' 'jenkins' 'jenkins' '' q (3b5bd0 142f50d6 '') m 644 at 0 mt 1765570203 l 24 t 0 d 0 ext )
I1212 20:10:05.308490   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Twalk tag 0 fid 1 newfid 2 
I1212 20:10:05.308519   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rwalk tag 0 
I1212 20:10:05.308660   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Topen tag 0 fid 2 mode 0
I1212 20:10:05.308713   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Ropen tag 0 qid (3b5bd0 142f50d6 '') iounit 0
I1212 20:10:05.308869   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tstat tag 0 fid 1
I1212 20:10:05.308918   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rstat tag 0 st ('test-1765570203865063779' 'jenkins' 'jenkins' '' q (3b5bd0 142f50d6 '') m 644 at 0 mt 1765570203 l 24 t 0 d 0 ext )
I1212 20:10:05.309083   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tread tag 0 fid 2 offset 0 count 262120
I1212 20:10:05.309132   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rread tag 0 count 24
I1212 20:10:05.309261   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tread tag 0 fid 2 offset 24 count 262120
I1212 20:10:05.309291   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rread tag 0 count 0
I1212 20:10:05.309457   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tread tag 0 fid 2 offset 24 count 262120
I1212 20:10:05.309524   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rread tag 0 count 0
I1212 20:10:05.309772   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tclunk tag 0 fid 2
I1212 20:10:05.309822   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rclunk tag 0
I1212 20:10:05.309977   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tclunk tag 0 fid 1
I1212 20:10:05.310003   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rclunk tag 0
I1212 20:10:05.642732   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tstat tag 0 fid 0
I1212 20:10:05.642803   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (3b5bcd 142f50d6 'd') m d775 at 0 mt 1765570203 l 4096 t 0 d 0 ext )
I1212 20:10:05.643156   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Twalk tag 0 fid 0 newfid 1 
I1212 20:10:05.643193   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rwalk tag 0 
I1212 20:10:05.643328   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Topen tag 0 fid 1 mode 0
I1212 20:10:05.643375   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Ropen tag 0 qid (3b5bcd 142f50d6 'd') iounit 0
I1212 20:10:05.643511   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tstat tag 0 fid 0
I1212 20:10:05.643547   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (3b5bcd 142f50d6 'd') m d775 at 0 mt 1765570203 l 4096 t 0 d 0 ext )
I1212 20:10:05.643699   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tread tag 0 fid 1 offset 0 count 262120
I1212 20:10:05.643798   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rread tag 0 count 258
I1212 20:10:05.643935   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tread tag 0 fid 1 offset 258 count 261862
I1212 20:10:05.643963   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rread tag 0 count 0
I1212 20:10:05.644100   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tread tag 0 fid 1 offset 258 count 262120
I1212 20:10:05.644125   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rread tag 0 count 0
I1212 20:10:05.644253   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1212 20:10:05.644283   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rwalk tag 0 (3b5bce 142f50d6 '') 
I1212 20:10:05.644404   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tstat tag 0 fid 2
I1212 20:10:05.644438   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (3b5bce 142f50d6 '') m 644 at 0 mt 1765570203 l 24 t 0 d 0 ext )
I1212 20:10:05.644556   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tstat tag 0 fid 2
I1212 20:10:05.644594   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (3b5bce 142f50d6 '') m 644 at 0 mt 1765570203 l 24 t 0 d 0 ext )
I1212 20:10:05.644733   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tclunk tag 0 fid 2
I1212 20:10:05.644757   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rclunk tag 0
I1212 20:10:05.644877   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Twalk tag 0 fid 0 newfid 2 0:'test-1765570203865063779' 
I1212 20:10:05.644913   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rwalk tag 0 (3b5bd0 142f50d6 '') 
I1212 20:10:05.645036   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tstat tag 0 fid 2
I1212 20:10:05.645064   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rstat tag 0 st ('test-1765570203865063779' 'jenkins' 'jenkins' '' q (3b5bd0 142f50d6 '') m 644 at 0 mt 1765570203 l 24 t 0 d 0 ext )
I1212 20:10:05.645179   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tstat tag 0 fid 2
I1212 20:10:05.645214   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rstat tag 0 st ('test-1765570203865063779' 'jenkins' 'jenkins' '' q (3b5bd0 142f50d6 '') m 644 at 0 mt 1765570203 l 24 t 0 d 0 ext )
I1212 20:10:05.645339   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tclunk tag 0 fid 2
I1212 20:10:05.645360   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rclunk tag 0
I1212 20:10:05.645486   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1212 20:10:05.645514   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rwalk tag 0 (3b5bcf 142f50d6 '') 
I1212 20:10:05.645637   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tstat tag 0 fid 2
I1212 20:10:05.645677   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (3b5bcf 142f50d6 '') m 644 at 0 mt 1765570203 l 24 t 0 d 0 ext )
I1212 20:10:05.645800   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tstat tag 0 fid 2
I1212 20:10:05.645829   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (3b5bcf 142f50d6 '') m 644 at 0 mt 1765570203 l 24 t 0 d 0 ext )
I1212 20:10:05.645953   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tclunk tag 0 fid 2
I1212 20:10:05.645971   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rclunk tag 0
I1212 20:10:05.646114   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tread tag 0 fid 1 offset 258 count 262120
I1212 20:10:05.646136   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rread tag 0 count 0
I1212 20:10:05.646287   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tclunk tag 0 fid 1
I1212 20:10:05.646311   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rclunk tag 0
I1212 20:10:05.647495   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1212 20:10:05.647562   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rerror tag 0 ename 'file not found' ecode 0
I1212 20:10:05.934802   69709 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:34180 Tclunk tag 0 fid 0
I1212 20:10:05.934990   69709 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:34180 Rclunk tag 0
I1212 20:10:05.940960   69709 main.go:127] stdlog: ufs.go:147 disconnected
I1212 20:10:05.963322   69709 out.go:179] * Unmounting /mount-9p ...
I1212 20:10:05.966229   69709 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1212 20:10:05.973341   69709 mount.go:180] unmount for /mount-9p ran successfully
I1212 20:10:05.973459   69709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/.mount-process: {Name:mk4dcfdc9606e5a05bfb7e477d6ab5c3f36d0844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 20:10:05.976590   69709 out.go:203] 
W1212 20:10:05.979410   69709 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1212 20:10:05.982215   69709 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.20s)
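
The mount itself succeeded here (the 9p traffic above shows the guest walking and reading the test files); only the busybox-mount pod step failed, again on the refused apiserver connection. A minimal sketch for re-checking the mount by hand, mirroring the ssh probes the test already ran against this profile:

  # confirm the 9p filesystem is mounted inside the guest
  out/minikube-linux-arm64 -p functional-384006 ssh "findmnt -T /mount-9p"
  # list the files the test wrote into the host directory
  out/minikube-linux-arm64 -p functional-384006 ssh "ls -la /mount-9p"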

TestKubernetesUpgrade (804.05s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-016181 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-016181 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (40.202426974s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-016181
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-016181: (1.458069079s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-016181 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-016181 status --format={{.Host}}: exit status 7 (127.603045ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-016181 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-016181 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (12m36.071276316s)

-- stdout --
	* [kubernetes-upgrade-016181] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-016181" primary control-plane node in "kubernetes-upgrade-016181" cluster
	* Pulling base image v0.0.48-1765505794-22112 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	I1212 20:39:59.349446  202985 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:39:59.349554  202985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:39:59.349565  202985 out.go:374] Setting ErrFile to fd 2...
	I1212 20:39:59.349570  202985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:39:59.349810  202985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 20:39:59.350144  202985 out.go:368] Setting JSON to false
	I1212 20:39:59.350960  202985 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4949,"bootTime":1765567051,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 20:39:59.351023  202985 start.go:143] virtualization:  
	I1212 20:39:59.354368  202985 out.go:179] * [kubernetes-upgrade-016181] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:39:59.358094  202985 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:39:59.358176  202985 notify.go:221] Checking for updates...
	I1212 20:39:59.364645  202985 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:39:59.367592  202985 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 20:39:59.370415  202985 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	I1212 20:39:59.373369  202985 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:39:59.376234  202985 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:39:59.379457  202985 config.go:182] Loaded profile config "kubernetes-upgrade-016181": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1212 20:39:59.380206  202985 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:39:59.417997  202985 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:39:59.418114  202985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:39:59.501445  202985 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-12 20:39:59.491420445 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:39:59.501547  202985 docker.go:319] overlay module found
	I1212 20:39:59.504555  202985 out.go:179] * Using the docker driver based on existing profile
	I1212 20:39:59.507424  202985 start.go:309] selected driver: docker
	I1212 20:39:59.507445  202985 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-016181 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-016181 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:39:59.507556  202985 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:39:59.508415  202985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:39:59.599443  202985 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-12 20:39:59.589139977 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:39:59.599792  202985 cni.go:84] Creating CNI manager for ""
	I1212 20:39:59.599947  202985 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 20:39:59.600000  202985 start.go:353] cluster config:
	{Name:kubernetes-upgrade-016181 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-016181 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:39:59.603228  202985 out.go:179] * Starting "kubernetes-upgrade-016181" primary control-plane node in "kubernetes-upgrade-016181" cluster
	I1212 20:39:59.606113  202985 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 20:39:59.609045  202985 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:39:59.612140  202985 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 20:39:59.612189  202985 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 20:39:59.612204  202985 cache.go:65] Caching tarball of preloaded images
	I1212 20:39:59.612305  202985 preload.go:238] Found /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 20:39:59.612316  202985 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 20:39:59.612419  202985 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/kubernetes-upgrade-016181/config.json ...
	I1212 20:39:59.612630  202985 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:39:59.639795  202985 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:39:59.639815  202985 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:39:59.639829  202985 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:39:59.639880  202985 start.go:360] acquireMachinesLock for kubernetes-upgrade-016181: {Name:mk04ebc9cb8a35274e05a6464df93c804e9a4d73 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:39:59.639936  202985 start.go:364] duration metric: took 34.526µs to acquireMachinesLock for "kubernetes-upgrade-016181"
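
The Delay:500ms/Timeout:10m0s pair logged with the lock above describes a poll-until-deadline acquisition. A minimal Go sketch of that pattern, using a hypothetical exclusive lock file rather than minikube's actual machine mutex:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquireLock polls for an exclusive lock file, retrying every delay
	// until timeout elapses -- the Delay/Timeout pair from the log line.
	// Hypothetical stand-in; minikube's real machine lock is more involved.
	func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			// O_EXCL makes creation fail if the file already exists,
			// so exactly one caller wins the lock.
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out after %v waiting for %s", timeout, path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquireLock("/tmp/kubernetes-upgrade-016181.lock",
			500*time.Millisecond, 10*time.Minute)
		if err != nil {
			panic(err)
		}
		defer release()
		fmt.Println("lock held")
	}

The 34.526µs acquisition logged above is the uncontended fast path: the first poll wins immediately.
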
	I1212 20:39:59.639955  202985 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:39:59.639982  202985 fix.go:54] fixHost starting: 
	I1212 20:39:59.640260  202985 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-016181 --format={{.State.Status}}
	I1212 20:39:59.672170  202985 fix.go:112] recreateIfNeeded on kubernetes-upgrade-016181: state=Stopped err=<nil>
	W1212 20:39:59.672198  202985 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 20:39:59.675395  202985 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-016181" ...
	I1212 20:39:59.675476  202985 cli_runner.go:164] Run: docker start kubernetes-upgrade-016181
	I1212 20:39:59.979649  202985 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-016181 --format={{.State.Status}}
	I1212 20:40:00.017021  202985 kic.go:430] container "kubernetes-upgrade-016181" state is running.
	I1212 20:40:00.017525  202985 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-016181
	I1212 20:40:00.095782  202985 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/kubernetes-upgrade-016181/config.json ...
	I1212 20:40:00.096082  202985 machine.go:94] provisionDockerMachine start ...
	I1212 20:40:00.096177  202985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-016181
	I1212 20:40:00.207987  202985 main.go:143] libmachine: Using SSH client type: native
	I1212 20:40:00.229321  202985 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1212 20:40:00.229353  202985 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:40:00.241608  202985 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53008->127.0.0.1:33018: read: connection reset by peer
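
The handshake failure above is expected right after `docker start`: sshd inside the container is not accepting connections yet, so the provisioner retries until it succeeds a few seconds later. A sketch of such a retry loop, reduced to a plain TCP dial for illustration (the real client speaks SSH):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// dialWithRetry keeps attempting a TCP connection until the service
	// accepts or the deadline passes; resets and refusals during container
	// boot are treated as transient.
	func dialWithRetry(addr string, timeout time.Duration) (net.Conn, error) {
		deadline := time.Now().Add(timeout)
		for {
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err == nil {
				return conn, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("giving up on %s: %v", addr, err)
			}
			time.Sleep(time.Second)
		}
	}

	func main() {
		conn, err := dialWithRetry("127.0.0.1:33018", time.Minute)
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		fmt.Println("connected to", conn.RemoteAddr())
	}
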
	I1212 20:40:03.418400  202985 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-016181
	
	I1212 20:40:03.418506  202985 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-016181"
	I1212 20:40:03.418608  202985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-016181
	I1212 20:40:03.446054  202985 main.go:143] libmachine: Using SSH client type: native
	I1212 20:40:03.446425  202985 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1212 20:40:03.446438  202985 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-016181 && echo "kubernetes-upgrade-016181" | sudo tee /etc/hostname
	I1212 20:40:03.628292  202985 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-016181
	
	I1212 20:40:03.628455  202985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-016181
	I1212 20:40:03.653209  202985 main.go:143] libmachine: Using SSH client type: native
	I1212 20:40:03.653534  202985 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1212 20:40:03.653552  202985 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-016181' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-016181/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-016181' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:40:03.812873  202985 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:40:03.812950  202985 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-2315/.minikube}
	I1212 20:40:03.813022  202985 ubuntu.go:190] setting up certificates
	I1212 20:40:03.813077  202985 provision.go:84] configureAuth start
	I1212 20:40:03.813163  202985 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-016181
	I1212 20:40:03.841475  202985 provision.go:143] copyHostCerts
	I1212 20:40:03.841558  202985 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem, removing ...
	I1212 20:40:03.841568  202985 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem
	I1212 20:40:03.841645  202985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem (1078 bytes)
	I1212 20:40:03.841740  202985 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem, removing ...
	I1212 20:40:03.841745  202985 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem
	I1212 20:40:03.841771  202985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem (1123 bytes)
	I1212 20:40:03.841821  202985 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem, removing ...
	I1212 20:40:03.841826  202985 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem
	I1212 20:40:03.841849  202985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem (1679 bytes)
	I1212 20:40:03.841898  202985 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-016181 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-016181 localhost minikube]
	I1212 20:40:04.016600  202985 provision.go:177] copyRemoteCerts
	I1212 20:40:04.016714  202985 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:40:04.016791  202985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-016181
	I1212 20:40:04.053458  202985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/kubernetes-upgrade-016181/id_rsa Username:docker}
	I1212 20:40:04.168499  202985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 20:40:04.189506  202985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 20:40:04.209626  202985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 20:40:04.229665  202985 provision.go:87] duration metric: took 416.540558ms to configureAuth
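
configureAuth issues a server certificate whose SANs cover every name the machine answers to (127.0.0.1, 192.168.76.2, the hostname, localhost, minikube). A condensed crypto/x509 sketch of issuing such a cert; it generates a throwaway CA in memory, whereas the real flow signs with the ca.pem/ca-key.pem pair shown above:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway in-memory CA; the real flow loads ca.pem/ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert carrying the SANs from the provision log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-016181"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
			DNSNames:     []string{"kubernetes-upgrade-016181", "localhost", "minikube"},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// Signed by the CA: parent is caCert, signing key is caKey.
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("issued server cert, %d DER bytes\n", len(srvDER))
	}
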
	I1212 20:40:04.229738  202985 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:40:04.229959  202985 config.go:182] Loaded profile config "kubernetes-upgrade-016181": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 20:40:04.229991  202985 machine.go:97] duration metric: took 4.133898708s to provisionDockerMachine
	I1212 20:40:04.230013  202985 start.go:293] postStartSetup for "kubernetes-upgrade-016181" (driver="docker")
	I1212 20:40:04.230036  202985 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:40:04.230122  202985 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:40:04.230198  202985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-016181
	I1212 20:40:04.253495  202985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/kubernetes-upgrade-016181/id_rsa Username:docker}
	I1212 20:40:04.361199  202985 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:40:04.365409  202985 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:40:04.365436  202985 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:40:04.365447  202985 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-2315/.minikube/addons for local assets ...
	I1212 20:40:04.365498  202985 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-2315/.minikube/files for local assets ...
	I1212 20:40:04.365575  202985 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1212 20:40:04.365684  202985 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:40:04.378107  202985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1212 20:40:04.398720  202985 start.go:296] duration metric: took 168.683028ms for postStartSetup
	I1212 20:40:04.398890  202985 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:40:04.398967  202985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-016181
	I1212 20:40:04.422766  202985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/kubernetes-upgrade-016181/id_rsa Username:docker}
	I1212 20:40:04.534626  202985 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:40:04.540632  202985 fix.go:56] duration metric: took 4.900665157s for fixHost
	I1212 20:40:04.540657  202985 start.go:83] releasing machines lock for "kubernetes-upgrade-016181", held for 4.900712113s
	I1212 20:40:04.540731  202985 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-016181
	I1212 20:40:04.560808  202985 ssh_runner.go:195] Run: cat /version.json
	I1212 20:40:04.560860  202985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-016181
	I1212 20:40:04.561097  202985 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:40:04.561153  202985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-016181
	I1212 20:40:04.585749  202985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/kubernetes-upgrade-016181/id_rsa Username:docker}
	I1212 20:40:04.614653  202985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/kubernetes-upgrade-016181/id_rsa Username:docker}
	I1212 20:40:04.703720  202985 ssh_runner.go:195] Run: systemctl --version
	I1212 20:40:04.825861  202985 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:40:04.831148  202985 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:40:04.831273  202985 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:40:04.843291  202985 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:40:04.843363  202985 start.go:496] detecting cgroup driver to use...
	I1212 20:40:04.843407  202985 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:40:04.843490  202985 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 20:40:04.861901  202985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 20:40:04.878712  202985 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:40:04.878892  202985 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:40:04.896805  202985 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:40:04.911324  202985 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:40:05.092029  202985 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:40:05.265257  202985 docker.go:234] disabling docker service ...
	I1212 20:40:05.265322  202985 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:40:05.284865  202985 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:40:05.301882  202985 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:40:05.461344  202985 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:40:05.605806  202985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:40:05.621082  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:40:05.637246  202985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 20:40:05.647372  202985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 20:40:05.657480  202985 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 20:40:05.657624  202985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 20:40:05.667277  202985 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 20:40:05.677042  202985 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 20:40:05.686589  202985 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 20:40:05.696400  202985 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:40:05.705570  202985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 20:40:05.715358  202985 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 20:40:05.725997  202985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 20:40:05.735748  202985 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:40:05.745091  202985 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:40:05.753541  202985 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:40:05.898756  202985 ssh_runner.go:195] Run: sudo systemctl restart containerd
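
The sed pipeline above rewrites /etc/containerd/config.toml in place: force SystemdCgroup = false for the cgroupfs driver, pin the pause sandbox image, and so on, before the daemon-reload and restart. The same edits expressed as a Go sketch over the two most important keys (illustrative only; the log's sed commands are authoritative):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		// Mirrors two of the sed edits above: force the cgroupfs driver by
		// rewriting SystemdCgroup, and pin the sandbox (pause) image.
		path := "/etc/containerd/config.toml"
		cfg, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		cfg = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
			ReplaceAll(cfg, []byte("${1}SystemdCgroup = false"))
		cfg = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
			ReplaceAll(cfg, []byte(`${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`))
		if err := os.WriteFile(path, cfg, 0o644); err != nil {
			panic(err)
		}
		fmt.Println("config.toml updated; restart containerd to apply")
	}
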
	I1212 20:40:06.128655  202985 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 20:40:06.128804  202985 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 20:40:06.133136  202985 start.go:564] Will wait 60s for crictl version
	I1212 20:40:06.133203  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:40:06.138476  202985 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:40:06.166549  202985 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 20:40:06.166618  202985 ssh_runner.go:195] Run: containerd --version
	I1212 20:40:06.187713  202985 ssh_runner.go:195] Run: containerd --version
	I1212 20:40:06.214549  202985 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 20:40:06.217527  202985 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-016181 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:40:06.237935  202985 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1212 20:40:06.242502  202985 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:40:06.253082  202985 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-016181 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-016181 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:40:06.253205  202985 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 20:40:06.253269  202985 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:40:06.290258  202985 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1212 20:40:06.290328  202985 ssh_runner.go:195] Run: which lz4
	I1212 20:40:06.294690  202985 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 20:40:06.299391  202985 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 20:40:06.299422  202985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (305624510 bytes)
	I1212 20:40:08.020822  202985 containerd.go:563] duration metric: took 1.726169755s to copy over tarball
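
The tarball copy above follows an existence-check-then-copy pattern: `stat -c "%s %y"` over SSH, with a nonzero exit treated as "absent, transfer it". A sketch of that step shelling out to the ssh/scp binaries; the host, user and key wiring here are hypothetical:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// copyIfMissing stats remotePath over SSH and only scps localPath
	// across when the stat fails (file absent), mirroring the logged
	// existence check.
	func copyIfMissing(port, localPath, remotePath string) error {
		stat := exec.Command("ssh", "-p", port, "docker@127.0.0.1",
			"stat", "-c", "%s %y", remotePath)
		if err := stat.Run(); err == nil {
			fmt.Println(remotePath, "already present, skipping copy")
			return nil
		}
		scp := exec.Command("scp", "-P", port, localPath,
			"docker@127.0.0.1:"+remotePath)
		return scp.Run()
	}

	func main() {
		err := copyIfMissing("33018",
			"preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4",
			"/preloaded.tar.lz4")
		if err != nil {
			panic(err)
		}
	}
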
	I1212 20:40:08.020952  202985 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 20:40:10.373451  202985 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.352450703s)
	I1212 20:40:10.373552  202985 kubeadm.go:910] preload failed, will try to load cached images: extracting tarball: 
	** stderr ** 
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
	
	** /stderr **: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: Process exited with status 2
	I1212 20:40:10.373663  202985 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:40:10.418257  202985 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1212 20:40:10.418279  202985 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 20:40:10.418341  202985 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:40:10.418524  202985 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 20:40:10.418622  202985 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 20:40:10.418707  202985 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 20:40:10.418791  202985 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 20:40:10.418858  202985 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1212 20:40:10.418927  202985 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1212 20:40:10.419010  202985 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 20:40:10.421895  202985 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1212 20:40:10.422398  202985 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 20:40:10.422628  202985 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 20:40:10.422819  202985 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 20:40:10.423032  202985 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 20:40:10.423224  202985 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:40:10.423546  202985 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 20:40:10.424115  202985 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1212 20:40:10.769941  202985 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1212 20:40:10.770067  202985 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1212 20:40:10.772332  202985 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1212 20:40:10.772441  202985 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1212 20:40:10.813637  202985 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1212 20:40:10.813757  202985 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 20:40:10.846079  202985 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1212 20:40:10.846537  202985 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 20:40:10.846188  202985 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1212 20:40:10.846610  202985 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1212 20:40:10.846676  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:40:10.846492  202985 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1212 20:40:10.846792  202985 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1212 20:40:10.846867  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:40:10.848983  202985 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1212 20:40:10.849076  202985 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 20:40:10.867284  202985 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1212 20:40:10.867354  202985 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 20:40:10.867422  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:40:10.888909  202985 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1212 20:40:10.888986  202985 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1212 20:40:10.889063  202985 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1212 20:40:10.889110  202985 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1212 20:40:10.889169  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:40:10.930159  202985 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1212 20:40:10.930201  202985 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 20:40:10.930270  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:40:10.930361  202985 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 20:40:10.931952  202985 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1212 20:40:10.932028  202985 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 20:40:10.939132  202985 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1212 20:40:10.939238  202985 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 20:40:11.000386  202985 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1212 20:40:11.000494  202985 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1212 20:40:11.000576  202985 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1212 20:40:11.082002  202985 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 20:40:11.082125  202985 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 20:40:11.082252  202985 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1212 20:40:11.082279  202985 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 20:40:11.082307  202985 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1212 20:40:11.082341  202985 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 20:40:11.082310  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:40:11.082374  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:40:11.159031  202985 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1212 20:40:11.159034  202985 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1212 20:40:11.159154  202985 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1212 20:40:11.205902  202985 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 20:40:11.205909  202985 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 20:40:11.206019  202985 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 20:40:11.206049  202985 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 20:40:11.280231  202985 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22112-2315/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1212 20:40:11.337623  202985 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22112-2315/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1212 20:40:11.337712  202985 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1212 20:40:11.337784  202985 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1212 20:40:11.392847  202985 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 20:40:11.395759  202985 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 20:40:11.395901  202985 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22112-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1212 20:40:11.395973  202985 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 20:40:11.487207  202985 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22112-2315/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1212 20:40:11.487284  202985 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 20:40:11.487317  202985 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1212 20:40:11.487222  202985 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1212 20:40:11.487392  202985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1212 20:40:11.507527  202985 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22112-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1212 20:40:11.517365  202985 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 20:40:11.550576  202985 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1212 20:40:11.550616  202985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1212 20:40:11.550794  202985 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22112-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1212 20:40:11.594526  202985 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22112-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1212 20:40:11.604958  202985 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1212 20:40:11.605073  202985 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	W1212 20:40:11.717419  202985 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1212 20:40:11.717644  202985 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1212 20:40:11.717733  202985 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:40:11.944556  202985 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1212 20:40:11.944679  202985 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1212 20:40:11.944834  202985 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1212 20:40:11.944903  202985 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:40:11.944982  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:40:13.570079  202985 ssh_runner.go:235] Completed: which crictl: (1.625046939s)
	I1212 20:40:13.570147  202985 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:40:13.570225  202985 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.625512688s)
	I1212 20:40:13.757506  202985 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22112-2315/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1212 20:40:13.757615  202985 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1212 20:40:13.764913  202985 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1212 20:40:13.764959  202985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1212 20:40:13.902589  202985 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1212 20:40:13.902750  202985 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1212 20:40:14.428974  202985 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22112-2315/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1212 20:40:14.429106  202985 cache_images.go:94] duration metric: took 4.010810959s to LoadCachedImages
	W1212 20:40:14.429200  202985 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/22112-2315/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1: no such file or directory
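
Condensed, the LoadCachedImages loop above does three things per image: list it in containerd's k8s.io namespace, remove the stale tag on a miss, and import the cached tarball. A sketch reusing the ctr/crictl invocations from the log (the real code also compares the image digest, not just the name):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// ensureImage mirrors one iteration of the loop: if `ctr images ls`
	// does not list the image, drop any stale tag via crictl and import
	// the cached tarball already copied to /var/lib/minikube/images.
	func ensureImage(name, tarball string) error {
		out, _ := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "ls",
			"name=="+name).Output()
		if strings.Contains(string(out), name) {
			return nil // present under the expected name
		}
		_ = exec.Command("sudo", "crictl", "rmi", name).Run() // best-effort cleanup
		if err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import",
			tarball).Run(); err != nil {
			return fmt.Errorf("import %s: %v", tarball, err)
		}
		return nil
	}

	func main() {
		if err := ensureImage("registry.k8s.io/pause:3.10.1",
			"/var/lib/minikube/images/pause_3.10.1"); err != nil {
			panic(err)
		}
		fmt.Println("pause image loaded")
	}
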
	I1212 20:40:14.429365  202985 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1212 20:40:14.429496  202985 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-016181 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-016181 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:40:14.429597  202985 ssh_runner.go:195] Run: sudo crictl info
	I1212 20:40:14.470264  202985 cni.go:84] Creating CNI manager for ""
	I1212 20:40:14.470285  202985 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 20:40:14.470300  202985 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:40:14.470326  202985 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-016181 NodeName:kubernetes-upgrade-016181 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:40:14.470458  202985 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-016181"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:40:14.470558  202985 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 20:40:14.484140  202985 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:40:14.484214  202985 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:40:14.496954  202985 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (336 bytes)
	I1212 20:40:14.510772  202985 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 20:40:14.524706  202985 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I1212 20:40:14.541675  202985 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:40:14.545988  202985 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:40:14.556980  202985 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:40:14.769188  202985 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:40:14.803446  202985 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/kubernetes-upgrade-016181 for IP: 192.168.76.2
	I1212 20:40:14.803607  202985 certs.go:195] generating shared ca certs ...
	I1212 20:40:14.803638  202985 certs.go:227] acquiring lock for ca certs: {Name:mk39256c1929fe0803d745b94bd58afc348a7e3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:40:14.803864  202985 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key
	I1212 20:40:14.803955  202985 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key
	I1212 20:40:14.803986  202985 certs.go:257] generating profile certs ...
	I1212 20:40:14.804134  202985 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/kubernetes-upgrade-016181/client.key
	I1212 20:40:14.804358  202985 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/kubernetes-upgrade-016181/apiserver.key.2b603b60
	I1212 20:40:14.804445  202985 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/kubernetes-upgrade-016181/proxy-client.key
	I1212 20:40:14.804598  202985 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem (1338 bytes)
	W1212 20:40:14.804667  202985 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1212 20:40:14.804692  202985 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:40:14.804751  202985 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem (1078 bytes)
	I1212 20:40:14.804796  202985 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:40:14.804854  202985 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem (1679 bytes)
	I1212 20:40:14.804932  202985 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1212 20:40:14.805813  202985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:40:14.854700  202985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:40:14.909834  202985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:40:14.964376  202985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:40:15.016247  202985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/kubernetes-upgrade-016181/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1212 20:40:15.041689  202985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/kubernetes-upgrade-016181/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 20:40:15.074913  202985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/kubernetes-upgrade-016181/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:40:15.115271  202985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/kubernetes-upgrade-016181/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:40:15.153228  202985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:40:15.192472  202985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1212 20:40:15.233143  202985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1212 20:40:15.270456  202985 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:40:15.286755  202985 ssh_runner.go:195] Run: openssl version
	I1212 20:40:15.300526  202985 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1212 20:40:15.311603  202985 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1212 20:40:15.325723  202985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1212 20:40:15.329791  202985 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:40 /usr/share/ca-certificates/4120.pem
	I1212 20:40:15.329884  202985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1212 20:40:15.372317  202985 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:40:15.380552  202985 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1212 20:40:15.388544  202985 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1212 20:40:15.396657  202985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1212 20:40:15.400652  202985 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:40 /usr/share/ca-certificates/41202.pem
	I1212 20:40:15.400742  202985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1212 20:40:15.444327  202985 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:40:15.457183  202985 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:40:15.473809  202985 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:40:15.484283  202985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:40:15.488319  202985 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:30 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:40:15.488417  202985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:40:15.541290  202985 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
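The hash printed by each `openssl x509 -hash -noout` call above is what names the `<hash>.0` symlinks tested right after it (51391683.0, 3ec20f2e.0, b5213941.0): OpenSSL's trust store looks certificates up by an 8-hex-digit subject-name hash. A minimal sketch of the same install-and-verify pattern for the minikubeCA entry, mirroring the commands in the log:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem   # name-based link, as in the log
    hash=$(openssl x509 -hash -noout -in "$pem")       # prints e.g. b5213941
    sudo test -L "/etc/ssl/certs/$hash.0"              # hash link the trust store resolves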
	I1212 20:40:15.553612  202985 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:40:15.557638  202985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:40:15.614489  202985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:40:15.667547  202985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:40:15.734273  202985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:40:15.805914  202985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:40:15.885302  202985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
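Each `-checkend 86400` call asks openssl whether the certificate expires within the next 86400 seconds (24 hours); it exits non-zero if so, which lets validity be scripted without parsing dates. A minimal sketch over two of the cert paths checked above:

    for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
               /var/lib/minikube/certs/etcd/server.crt; do
      if ! sudo openssl x509 -noout -in "$crt" -checkend 86400; then
        echo "$crt expires within 24h; regenerate before upgrading" >&2
      fi
    done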
	I1212 20:40:15.950103  202985 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-016181 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-016181 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:40:15.950230  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 20:40:15.950330  202985 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:40:15.985677  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:40:15.985743  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:40:15.985762  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:40:15.985780  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:40:15.985796  202985 cri.go:89] found id: ""
	I1212 20:40:15.985869  202985 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1212 20:40:16.012639  202985 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:40:16Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1212 20:40:16.012792  202985 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:40:16.027905  202985 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 20:40:16.027994  202985 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 20:40:16.028076  202985 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 20:40:16.037445  202985 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:40:16.037954  202985 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-016181" does not appear in /home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 20:40:16.038123  202985 kubeconfig.go:62] /home/jenkins/minikube-integration/22112-2315/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-016181" cluster setting kubeconfig missing "kubernetes-upgrade-016181" context setting]
	I1212 20:40:16.038488  202985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/kubeconfig: {Name:mke1d79e374217e0c5bc78bc2d9631db0e1e9bda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:40:16.039142  202985 kapi.go:59] client config for kubernetes-upgrade-016181: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/profiles/kubernetes-upgrade-016181/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/profiles/kubernetes-upgrade-016181/client.key", CAFile:"/home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:40:16.040155  202985 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 20:40:16.040209  202985 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 20:40:16.040239  202985 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 20:40:16.040258  202985 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 20:40:16.040280  202985 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 20:40:16.040635  202985 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 20:40:16.053815  202985 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-12 20:39:32.160152654 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-12 20:40:14.538755269 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///run/containerd/containerd.sock
	   name: "kubernetes-upgrade-016181"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
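The diff records two separate changes: the kubeadm config API moves from v1beta3 to v1beta4, which turns every extraArgs string map into a list of name/value pairs (and drops etcd's proxy-refresh-interval), and kubernetesVersion jumps from v1.28.0 to the upgrade target v1.35.0-beta.0. Minikube detected the drift with the `diff -u` a few lines earlier and, as the later `cp` step shows, adopts the new file. A minimal sketch of that detect-then-replace pattern:

    old=/var/tmp/minikube/kubeadm.yaml
    new=/var/tmp/minikube/kubeadm.yaml.new
    if ! sudo diff -u "$old" "$new" >/dev/null; then   # diff exits 1 when the files differ
      sudo cp "$new" "$old"                            # reconfigure from the new config
    fi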
	I1212 20:40:16.053884  202985 kubeadm.go:1161] stopping kube-system containers ...
	I1212 20:40:16.053909  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1212 20:40:16.053991  202985 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:40:16.102104  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:40:16.102176  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:40:16.102195  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:40:16.102212  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:40:16.102228  202985 cri.go:89] found id: ""
	I1212 20:40:16.102256  202985 cri.go:252] Stopping containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:40:16.102335  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:40:16.108533  202985 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748
	I1212 20:40:16.166025  202985 ssh_runner.go:195] Run: sudo systemctl stop kubelet
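Stopping the control plane before reconfiguring follows the fixed order visible above: list kube-system containers by CRI label, stop them with a 10-second grace period, then stop the kubelet so it cannot restart them. A minimal sketch of the same sequence:

    ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
    if [ -n "$ids" ]; then
      sudo crictl stop --timeout=10 $ids   # unquoted on purpose: one argument per id
    fi
    sudo systemctl stop kubelet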
	I1212 20:40:16.192444  202985 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:40:16.205051  202985 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5643 Dec 12 20:39 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Dec 12 20:39 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 12 20:39 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Dec 12 20:39 /etc/kubernetes/scheduler.conf
	
	I1212 20:40:16.205152  202985 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 20:40:16.219488  202985 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 20:40:16.229383  202985 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 20:40:16.245305  202985 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:40:16.245417  202985 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:40:16.256996  202985 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 20:40:16.269070  202985 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:40:16.269181  202985 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
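The grep-then-remove passes above implement a simple repair rule: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is deleted so the kubeconfig phase below can regenerate it. Here that removes controller-manager.conf and scheduler.conf, while admin.conf and kubelet.conf pass the check and survive. A minimal sketch of the rule:

    ep=https://control-plane.minikube.internal:8443
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$ep" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done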
	I1212 20:40:16.285001  202985 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:40:16.293879  202985 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:40:16.382678  202985 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:40:18.733006  202985 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.350295761s)
	I1212 20:40:18.733070  202985 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:40:19.091495  202985 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:40:19.245923  202985 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
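Rather than a full `kubeadm init`, the restart replays individual phases against the updated config: certs, kubeconfig, kubelet-start, control-plane, and local etcd, in that order. A minimal sketch of the same sequence (the PATH prefix selects the freshly installed v1.35.0-beta.0 binaries, exactly as in the log):

    cfg=/var/tmp/minikube/kubeadm.yaml
    bin=/var/lib/minikube/binaries/v1.35.0-beta.0
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="$bin:$PATH" kubeadm init phase $phase --config "$cfg"
    done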
	I1212 20:40:19.349911  202985 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:40:19.350038  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:19.850170  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:20.350421  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:20.850667  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:21.350687  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:21.850347  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:22.350904  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:22.851026  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:23.351103  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:23.850292  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:24.350393  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:24.850870  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:25.350168  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:25.850790  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:26.350566  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:26.850156  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:27.350730  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:27.850195  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:28.351099  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:28.850972  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:29.350847  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:29.850708  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:30.350163  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:30.850172  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:31.350943  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:31.850670  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:32.350743  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:32.850197  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:33.351017  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:33.850155  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:34.350725  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:34.850153  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:35.350151  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:35.850914  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:36.350618  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:36.850892  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:37.351124  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:37.850830  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:38.351043  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:38.850638  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:39.350147  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:39.850151  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:40.350179  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:40.850152  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:41.350873  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:41.850843  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:42.350389  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:42.850895  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:43.350953  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:43.850542  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:44.350868  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:44.850477  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:45.350811  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:45.850768  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:46.350720  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:46.850545  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:47.350481  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:47.850208  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:48.350220  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:48.851120  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:49.350848  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:49.850860  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:50.350723  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:50.850354  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:51.350930  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:51.850362  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:52.351052  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:52.850420  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:53.350108  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:53.850314  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:54.351086  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:54.850245  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:55.351108  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:55.850225  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:56.351141  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:56.850991  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:57.350215  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:57.850202  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:58.350248  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:58.850277  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:59.350703  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:40:59.850152  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:00.351024  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:00.851051  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:01.351043  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:01.850793  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:02.350873  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:02.850186  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:03.351155  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:03.850926  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:04.350809  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:04.850242  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:05.350233  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:05.850815  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:06.350695  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:06.850218  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:07.350972  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:07.851108  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:08.350536  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:08.850892  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:09.350803  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:09.850261  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:10.350890  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:10.850611  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:11.350927  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:11.850246  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:12.350621  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:12.850654  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:13.350943  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:13.850420  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:14.350937  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:14.851087  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:15.350474  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:15.851032  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:16.350854  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:16.850858  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:17.350198  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:17.850511  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:18.350957  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:18.850134  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
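The minute-long block above is a ~500 ms poll: `pgrep -xnf` matches the full command line of the newest matching process exactly against the pattern kube-apiserver.*minikube.*, and the loop repeats until a match appears or the wait budget is spent. Here it never matches, so minikube falls through to log gathering below. A minimal sketch of the same wait, with a hypothetical 60-second budget:

    deadline=$(( $(date +%s) + 60 ))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$(date +%s)" -ge "$deadline" ]; then
        echo "kube-apiserver process never appeared" >&2
        break
      fi
      sleep 0.5
    done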
	I1212 20:41:19.350891  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:41:19.350971  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:41:19.394160  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:41:19.394189  202985 cri.go:89] found id: ""
	I1212 20:41:19.394197  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:41:19.394250  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:19.398783  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:41:19.398859  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:41:19.434146  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:41:19.434165  202985 cri.go:89] found id: ""
	I1212 20:41:19.434173  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:41:19.434226  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:19.439203  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:41:19.439369  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:41:19.483371  202985 cri.go:89] found id: ""
	I1212 20:41:19.483393  202985 logs.go:282] 0 containers: []
	W1212 20:41:19.483401  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:41:19.483407  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:41:19.483475  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:41:19.527185  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:41:19.527208  202985 cri.go:89] found id: ""
	I1212 20:41:19.527216  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:41:19.527281  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:19.535207  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:41:19.535320  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:41:19.563022  202985 cri.go:89] found id: ""
	I1212 20:41:19.563048  202985 logs.go:282] 0 containers: []
	W1212 20:41:19.563057  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:41:19.563064  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:41:19.563123  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:41:19.601591  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:41:19.601615  202985 cri.go:89] found id: ""
	I1212 20:41:19.601623  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:41:19.601682  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:19.605689  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:41:19.605787  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:41:19.631901  202985 cri.go:89] found id: ""
	I1212 20:41:19.631927  202985 logs.go:282] 0 containers: []
	W1212 20:41:19.631936  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:41:19.631942  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:41:19.631999  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:41:19.664876  202985 cri.go:89] found id: ""
	I1212 20:41:19.664903  202985 logs.go:282] 0 containers: []
	W1212 20:41:19.664911  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:41:19.664925  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:41:19.664937  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:41:19.702741  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:41:19.702773  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:41:19.752006  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:41:19.752045  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:41:19.784406  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:41:19.784443  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:41:19.849064  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:41:19.849097  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:41:19.883682  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:41:19.883717  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:41:19.940505  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:41:19.940607  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:41:19.961866  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:41:19.961894  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:41:20.061298  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:41:20.061361  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:41:20.061389  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:41:22.604879  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:22.615766  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:41:22.615881  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:41:22.644867  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:41:22.644892  202985 cri.go:89] found id: ""
	I1212 20:41:22.644902  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:41:22.644956  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:22.648555  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:41:22.648649  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:41:22.673447  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:41:22.673470  202985 cri.go:89] found id: ""
	I1212 20:41:22.673478  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:41:22.673532  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:22.677209  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:41:22.677279  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:41:22.701344  202985 cri.go:89] found id: ""
	I1212 20:41:22.701373  202985 logs.go:282] 0 containers: []
	W1212 20:41:22.701382  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:41:22.701389  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:41:22.701450  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:41:22.730464  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:41:22.730488  202985 cri.go:89] found id: ""
	I1212 20:41:22.730507  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:41:22.730586  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:22.733996  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:41:22.734061  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:41:22.759073  202985 cri.go:89] found id: ""
	I1212 20:41:22.759095  202985 logs.go:282] 0 containers: []
	W1212 20:41:22.759103  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:41:22.759122  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:41:22.759182  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:41:22.786140  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:41:22.786159  202985 cri.go:89] found id: ""
	I1212 20:41:22.786167  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:41:22.786224  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:22.789889  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:41:22.790010  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:41:22.816371  202985 cri.go:89] found id: ""
	I1212 20:41:22.816393  202985 logs.go:282] 0 containers: []
	W1212 20:41:22.816401  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:41:22.816407  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:41:22.816464  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:41:22.842688  202985 cri.go:89] found id: ""
	I1212 20:41:22.842753  202985 logs.go:282] 0 containers: []
	W1212 20:41:22.842775  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:41:22.842799  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:41:22.842829  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:41:22.899529  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:41:22.899561  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:41:22.933573  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:41:22.933605  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:41:22.970422  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:41:22.970494  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:41:23.006093  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:41:23.006137  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:41:23.037452  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:41:23.037483  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:41:23.050038  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:41:23.050069  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:41:23.115679  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:41:23.115699  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:41:23.115712  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:41:23.146497  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:41:23.146529  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:41:25.675520  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:25.685624  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:41:25.685698  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:41:25.711543  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:41:25.711604  202985 cri.go:89] found id: ""
	I1212 20:41:25.711636  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:41:25.711722  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:25.715632  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:41:25.715700  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:41:25.739718  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:41:25.739740  202985 cri.go:89] found id: ""
	I1212 20:41:25.739749  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:41:25.739803  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:25.743514  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:41:25.743597  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:41:25.768849  202985 cri.go:89] found id: ""
	I1212 20:41:25.768874  202985 logs.go:282] 0 containers: []
	W1212 20:41:25.768891  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:41:25.768899  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:41:25.768955  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:41:25.792811  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:41:25.792872  202985 cri.go:89] found id: ""
	I1212 20:41:25.792887  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:41:25.792945  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:25.796599  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:41:25.796696  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:41:25.826660  202985 cri.go:89] found id: ""
	I1212 20:41:25.826697  202985 logs.go:282] 0 containers: []
	W1212 20:41:25.826707  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:41:25.826729  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:41:25.826811  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:41:25.854487  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:41:25.854556  202985 cri.go:89] found id: ""
	I1212 20:41:25.854568  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:41:25.854645  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:25.858359  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:41:25.858431  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:41:25.884174  202985 cri.go:89] found id: ""
	I1212 20:41:25.884197  202985 logs.go:282] 0 containers: []
	W1212 20:41:25.884206  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:41:25.884212  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:41:25.884275  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:41:25.911364  202985 cri.go:89] found id: ""
	I1212 20:41:25.911397  202985 logs.go:282] 0 containers: []
	W1212 20:41:25.911406  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:41:25.911420  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:41:25.911431  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:41:25.939115  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:41:25.939149  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:41:25.952153  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:41:25.952182  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:41:25.986076  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:41:25.986109  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:41:26.021010  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:41:26.021044  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:41:26.051439  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:41:26.051469  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:41:26.113414  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:41:26.113446  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:41:26.177376  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:41:26.177395  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:41:26.177408  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:41:26.230041  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:41:26.230083  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:41:28.784021  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:28.793924  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:41:28.794002  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:41:28.818765  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:41:28.818787  202985 cri.go:89] found id: ""
	I1212 20:41:28.818796  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:41:28.818849  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:28.822443  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:41:28.822511  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:41:28.850223  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:41:28.850244  202985 cri.go:89] found id: ""
	I1212 20:41:28.850252  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:41:28.850316  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:28.853928  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:41:28.854236  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:41:28.883396  202985 cri.go:89] found id: ""
	I1212 20:41:28.883422  202985 logs.go:282] 0 containers: []
	W1212 20:41:28.883431  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:41:28.883437  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:41:28.883498  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:41:28.913802  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:41:28.913828  202985 cri.go:89] found id: ""
	I1212 20:41:28.913836  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:41:28.913896  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:28.917647  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:41:28.917762  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:41:28.946045  202985 cri.go:89] found id: ""
	I1212 20:41:28.946106  202985 logs.go:282] 0 containers: []
	W1212 20:41:28.946130  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:41:28.946148  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:41:28.946218  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:41:28.974670  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:41:28.974732  202985 cri.go:89] found id: ""
	I1212 20:41:28.974754  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:41:28.974832  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:28.978327  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:41:28.978434  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:41:29.002398  202985 cri.go:89] found id: ""
	I1212 20:41:29.002463  202985 logs.go:282] 0 containers: []
	W1212 20:41:29.002488  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:41:29.002506  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:41:29.002578  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:41:29.028174  202985 cri.go:89] found id: ""
	I1212 20:41:29.028195  202985 logs.go:282] 0 containers: []
	W1212 20:41:29.028204  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:41:29.028220  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:41:29.028234  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:41:29.070587  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:41:29.070617  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:41:29.109329  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:41:29.109367  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:41:29.139580  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:41:29.139613  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:41:29.203009  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:41:29.203044  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:41:29.220571  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:41:29.220598  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:41:29.293701  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:41:29.293721  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:41:29.293734  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:41:29.338853  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:41:29.338886  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:41:29.369469  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:41:29.369498  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
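
The block above is one complete diagnostic pass: minikube enumerates the expected control-plane containers through crictl, tails the logs of the ones it finds (kube-apiserver, etcd, kube-scheduler, kube-controller-manager), and attempts `kubectl describe nodes`, which fails because nothing is serving on localhost:8443. The same pass then repeats every few seconds until the start timeout expires. As an illustration only, the sketch below approximates that poll-and-gather loop in Go; the helper names (`apiserverUp`, `pollAPIServer`) and the plain TCP dial are assumptions for the sketch, not minikube's actual implementation, which lives in logs.go and cri.go.

```go
// Illustrative sketch only: approximates the repeating health poll visible
// in the log above. apiserverUp and pollAPIServer are hypothetical names.
package main

import (
	"fmt"
	"net"
	"os/exec"
	"time"
)

// apiserverUp mimics the failing probe: can we open a TCP connection to
// the apiserver port? "connection refused" here matches the log's error.
func apiserverUp(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

// pollAPIServer repeats the check every ~3s, like the 20:41:28 -> 20:41:57
// cycles above, gathering a container listing on each failure.
func pollAPIServer(addr string, deadline time.Time) error {
	for time.Now().Before(deadline) {
		if apiserverUp(addr) {
			return nil
		}
		// One of the diagnostics minikube collects on each pass.
		out, _ := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name=kube-apiserver").Output()
		fmt.Printf("apiserver not reachable; container ids: %s\n", out)
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("apiserver %s never became reachable", addr)
}

func main() {
	if err := pollAPIServer("localhost:8443", time.Now().Add(2*time.Minute)); err != nil {
		fmt.Println(err)
	}
}
```
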
	I1212 20:41:31.897194  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:31.907098  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:41:31.907168  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:41:31.933546  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:41:31.933566  202985 cri.go:89] found id: ""
	I1212 20:41:31.933577  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:41:31.933634  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:31.937430  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:41:31.937502  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:41:31.964722  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:41:31.964747  202985 cri.go:89] found id: ""
	I1212 20:41:31.964755  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:41:31.964810  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:31.968311  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:41:31.968385  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:41:31.993172  202985 cri.go:89] found id: ""
	I1212 20:41:31.993194  202985 logs.go:282] 0 containers: []
	W1212 20:41:31.993202  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:41:31.993208  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:41:31.993269  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:41:32.020006  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:41:32.020027  202985 cri.go:89] found id: ""
	I1212 20:41:32.020035  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:41:32.020095  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:32.023887  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:41:32.023981  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:41:32.049124  202985 cri.go:89] found id: ""
	I1212 20:41:32.049147  202985 logs.go:282] 0 containers: []
	W1212 20:41:32.049156  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:41:32.049161  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:41:32.049220  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:41:32.074474  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:41:32.074494  202985 cri.go:89] found id: ""
	I1212 20:41:32.074503  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:41:32.074560  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:32.078244  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:41:32.078337  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:41:32.103357  202985 cri.go:89] found id: ""
	I1212 20:41:32.103385  202985 logs.go:282] 0 containers: []
	W1212 20:41:32.103394  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:41:32.103401  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:41:32.103505  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:41:32.128875  202985 cri.go:89] found id: ""
	I1212 20:41:32.128981  202985 logs.go:282] 0 containers: []
	W1212 20:41:32.129002  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:41:32.129018  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:41:32.129030  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:41:32.186665  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:41:32.186700  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:41:32.275092  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:41:32.275123  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:41:32.275137  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:41:32.310762  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:41:32.310796  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:41:32.351936  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:41:32.351969  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:41:32.393805  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:41:32.393834  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:41:32.406325  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:41:32.406358  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:41:32.442809  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:41:32.442843  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:41:32.472573  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:41:32.472607  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:41:35.006075  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:35.019827  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:41:35.019927  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:41:35.050233  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:41:35.050304  202985 cri.go:89] found id: ""
	I1212 20:41:35.050327  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:41:35.050398  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:35.053949  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:41:35.054016  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:41:35.078709  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:41:35.078772  202985 cri.go:89] found id: ""
	I1212 20:41:35.078791  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:41:35.078877  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:35.082705  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:41:35.082781  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:41:35.108539  202985 cri.go:89] found id: ""
	I1212 20:41:35.108562  202985 logs.go:282] 0 containers: []
	W1212 20:41:35.108571  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:41:35.108577  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:41:35.108641  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:41:35.134395  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:41:35.134419  202985 cri.go:89] found id: ""
	I1212 20:41:35.134433  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:41:35.134490  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:35.138089  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:41:35.138164  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:41:35.167862  202985 cri.go:89] found id: ""
	I1212 20:41:35.167930  202985 logs.go:282] 0 containers: []
	W1212 20:41:35.167954  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:41:35.167977  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:41:35.168078  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:41:35.204402  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:41:35.204426  202985 cri.go:89] found id: ""
	I1212 20:41:35.204434  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:41:35.204489  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:35.208452  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:41:35.208533  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:41:35.234101  202985 cri.go:89] found id: ""
	I1212 20:41:35.234128  202985 logs.go:282] 0 containers: []
	W1212 20:41:35.234146  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:41:35.234159  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:41:35.234227  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:41:35.263742  202985 cri.go:89] found id: ""
	I1212 20:41:35.263770  202985 logs.go:282] 0 containers: []
	W1212 20:41:35.263787  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:41:35.263801  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:41:35.263813  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:41:35.295480  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:41:35.295513  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:41:35.353909  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:41:35.353941  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:41:35.367160  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:41:35.367189  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:41:35.402072  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:41:35.402105  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:41:35.439294  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:41:35.439329  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:41:35.468891  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:41:35.468926  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:41:35.537278  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:41:35.537298  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:41:35.537313  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:41:35.574373  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:41:35.574720  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:41:38.111966  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:38.122319  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:41:38.122385  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:41:38.146395  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:41:38.146419  202985 cri.go:89] found id: ""
	I1212 20:41:38.146427  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:41:38.146483  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:38.150016  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:41:38.150088  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:41:38.176071  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:41:38.176091  202985 cri.go:89] found id: ""
	I1212 20:41:38.176098  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:41:38.176163  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:38.179687  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:41:38.179759  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:41:38.224252  202985 cri.go:89] found id: ""
	I1212 20:41:38.224277  202985 logs.go:282] 0 containers: []
	W1212 20:41:38.224286  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:41:38.224292  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:41:38.224354  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:41:38.255089  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:41:38.255113  202985 cri.go:89] found id: ""
	I1212 20:41:38.255123  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:41:38.255179  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:38.259295  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:41:38.259372  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:41:38.284918  202985 cri.go:89] found id: ""
	I1212 20:41:38.284941  202985 logs.go:282] 0 containers: []
	W1212 20:41:38.284955  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:41:38.284962  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:41:38.285018  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:41:38.312104  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:41:38.312124  202985 cri.go:89] found id: ""
	I1212 20:41:38.312132  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:41:38.312190  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:38.315652  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:41:38.315717  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:41:38.341566  202985 cri.go:89] found id: ""
	I1212 20:41:38.341588  202985 logs.go:282] 0 containers: []
	W1212 20:41:38.341596  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:41:38.341602  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:41:38.341660  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:41:38.366130  202985 cri.go:89] found id: ""
	I1212 20:41:38.366155  202985 logs.go:282] 0 containers: []
	W1212 20:41:38.366163  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:41:38.366176  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:41:38.366187  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:41:38.398751  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:41:38.398785  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:41:38.429710  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:41:38.429739  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:41:38.486410  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:41:38.486443  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:41:38.499396  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:41:38.499429  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:41:38.532844  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:41:38.532876  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:41:38.571063  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:41:38.571098  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:41:38.602221  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:41:38.602250  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:41:38.669079  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:41:38.669099  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:41:38.669113  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:41:41.203873  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:41.224796  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:41:41.224870  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:41:41.269216  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:41:41.269240  202985 cri.go:89] found id: ""
	I1212 20:41:41.269248  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:41:41.269303  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:41.273749  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:41:41.273819  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:41:41.313184  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:41:41.313209  202985 cri.go:89] found id: ""
	I1212 20:41:41.313217  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:41:41.313277  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:41.318937  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:41:41.319008  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:41:41.372973  202985 cri.go:89] found id: ""
	I1212 20:41:41.372995  202985 logs.go:282] 0 containers: []
	W1212 20:41:41.373003  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:41:41.373010  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:41:41.373064  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:41:41.404548  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:41:41.404568  202985 cri.go:89] found id: ""
	I1212 20:41:41.404576  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:41:41.404635  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:41.408860  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:41:41.408935  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:41:41.437595  202985 cri.go:89] found id: ""
	I1212 20:41:41.437616  202985 logs.go:282] 0 containers: []
	W1212 20:41:41.437625  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:41:41.437631  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:41:41.437690  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:41:41.465061  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:41:41.465134  202985 cri.go:89] found id: ""
	I1212 20:41:41.465157  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:41:41.465242  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:41.469506  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:41:41.469576  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:41:41.497648  202985 cri.go:89] found id: ""
	I1212 20:41:41.497670  202985 logs.go:282] 0 containers: []
	W1212 20:41:41.497678  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:41:41.497684  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:41:41.497741  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:41:41.534028  202985 cri.go:89] found id: ""
	I1212 20:41:41.534102  202985 logs.go:282] 0 containers: []
	W1212 20:41:41.534123  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:41:41.534198  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:41:41.534227  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:41:41.604860  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:41:41.604937  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:41:41.618372  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:41:41.618398  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:41:41.667251  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:41:41.667284  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:41:41.706252  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:41:41.706282  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:41:41.770507  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:41:41.770527  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:41:41.770539  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:41:41.807179  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:41:41.807212  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:41:41.838248  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:41:41.838281  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:41:41.867818  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:41:41.867859  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:41:44.400113  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:44.410339  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:41:44.410403  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:41:44.440444  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:41:44.440463  202985 cri.go:89] found id: ""
	I1212 20:41:44.440472  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:41:44.440530  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:44.445537  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:41:44.445610  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:41:44.484977  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:41:44.484998  202985 cri.go:89] found id: ""
	I1212 20:41:44.485005  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:41:44.485063  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:44.489035  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:41:44.489103  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:41:44.518244  202985 cri.go:89] found id: ""
	I1212 20:41:44.518267  202985 logs.go:282] 0 containers: []
	W1212 20:41:44.518276  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:41:44.518283  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:41:44.518347  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:41:44.546656  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:41:44.546730  202985 cri.go:89] found id: ""
	I1212 20:41:44.546756  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:41:44.546840  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:44.550993  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:41:44.551062  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:41:44.582105  202985 cri.go:89] found id: ""
	I1212 20:41:44.582172  202985 logs.go:282] 0 containers: []
	W1212 20:41:44.582193  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:41:44.582211  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:41:44.582307  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:41:44.621777  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:41:44.621851  202985 cri.go:89] found id: ""
	I1212 20:41:44.621871  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:41:44.621954  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:44.626372  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:41:44.626481  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:41:44.655376  202985 cri.go:89] found id: ""
	I1212 20:41:44.655452  202985 logs.go:282] 0 containers: []
	W1212 20:41:44.655473  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:41:44.655490  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:41:44.655572  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:41:44.693018  202985 cri.go:89] found id: ""
	I1212 20:41:44.693083  202985 logs.go:282] 0 containers: []
	W1212 20:41:44.693108  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:41:44.693135  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:41:44.693172  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:41:44.757408  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:41:44.757499  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:41:44.774461  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:41:44.774535  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:41:44.816037  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:41:44.816074  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:41:44.851077  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:41:44.851106  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:41:44.896458  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:41:44.896485  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:41:45.005960  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:41:45.005980  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:41:45.005998  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:41:45.086337  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:41:45.086423  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:41:45.149129  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:41:45.149175  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:41:47.737963  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:47.747638  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:41:47.747706  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:41:47.771714  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:41:47.771745  202985 cri.go:89] found id: ""
	I1212 20:41:47.771759  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:41:47.771813  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:47.775295  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:41:47.775361  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:41:47.800618  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:41:47.800638  202985 cri.go:89] found id: ""
	I1212 20:41:47.800654  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:41:47.800705  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:47.804726  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:41:47.804797  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:41:47.838681  202985 cri.go:89] found id: ""
	I1212 20:41:47.838708  202985 logs.go:282] 0 containers: []
	W1212 20:41:47.838719  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:41:47.838725  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:41:47.838784  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:41:47.878115  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:41:47.878139  202985 cri.go:89] found id: ""
	I1212 20:41:47.878148  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:41:47.878203  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:47.882310  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:41:47.882384  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:41:47.914029  202985 cri.go:89] found id: ""
	I1212 20:41:47.914065  202985 logs.go:282] 0 containers: []
	W1212 20:41:47.914073  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:41:47.914079  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:41:47.914134  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:41:47.947745  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:41:47.947768  202985 cri.go:89] found id: ""
	I1212 20:41:47.947776  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:41:47.947829  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:47.951640  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:41:47.951714  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:41:47.990053  202985 cri.go:89] found id: ""
	I1212 20:41:47.990077  202985 logs.go:282] 0 containers: []
	W1212 20:41:47.990096  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:41:47.990104  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:41:47.990172  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:41:48.022397  202985 cri.go:89] found id: ""
	I1212 20:41:48.022426  202985 logs.go:282] 0 containers: []
	W1212 20:41:48.022445  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:41:48.022459  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:41:48.022472  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:41:48.040643  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:41:48.040675  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:41:48.123405  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:41:48.123431  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:41:48.123488  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:41:48.185359  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:41:48.185392  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:41:48.244127  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:41:48.244199  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:41:48.301258  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:41:48.301340  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:41:48.351854  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:41:48.351936  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:41:48.387695  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:41:48.387794  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:41:48.421319  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:41:48.421400  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:41:50.950796  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:50.960810  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:41:50.960876  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:41:51.000548  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:41:51.000573  202985 cri.go:89] found id: ""
	I1212 20:41:51.000582  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:41:51.000637  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:51.004298  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:41:51.004369  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:41:51.034977  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:41:51.034997  202985 cri.go:89] found id: ""
	I1212 20:41:51.035006  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:41:51.035062  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:51.038720  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:41:51.038788  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:41:51.065962  202985 cri.go:89] found id: ""
	I1212 20:41:51.066038  202985 logs.go:282] 0 containers: []
	W1212 20:41:51.066065  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:41:51.066084  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:41:51.066149  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:41:51.092697  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:41:51.092722  202985 cri.go:89] found id: ""
	I1212 20:41:51.092731  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:41:51.092792  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:51.096524  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:41:51.096596  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:41:51.122395  202985 cri.go:89] found id: ""
	I1212 20:41:51.122420  202985 logs.go:282] 0 containers: []
	W1212 20:41:51.122429  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:41:51.122435  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:41:51.122512  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:41:51.149315  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:41:51.149387  202985 cri.go:89] found id: ""
	I1212 20:41:51.149410  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:41:51.149494  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:51.153189  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:41:51.153306  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:41:51.199028  202985 cri.go:89] found id: ""
	I1212 20:41:51.199051  202985 logs.go:282] 0 containers: []
	W1212 20:41:51.199060  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:41:51.199067  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:41:51.199129  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:41:51.229103  202985 cri.go:89] found id: ""
	I1212 20:41:51.229127  202985 logs.go:282] 0 containers: []
	W1212 20:41:51.229135  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:41:51.229148  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:41:51.229162  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:41:51.242298  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:41:51.242324  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:41:51.307164  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:41:51.307184  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:41:51.307211  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:41:51.339425  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:41:51.339455  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:41:51.389950  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:41:51.389977  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:41:51.462100  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:41:51.462132  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:41:51.519832  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:41:51.519872  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:41:51.566207  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:41:51.566240  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:41:51.607312  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:41:51.607346  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
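
Worth noting in every pass: `crictl ps -a` (all states) keeps returning the kube-apiserver container ID, yet localhost:8443 refuses connections, so the container exists but is not serving, consistent with a crash loop. A minimal, hedged way to separate those two facts is to query crictl with and without `-a`, since dropping `-a` limits the listing to running containers. The sketch below is an assumption-level illustration, not part of the test harness.

```go
// Hedged illustration: distinguish "container exists in any state" from
// "container is currently running" using the same crictl queries the log
// shows. An empty running set alongside a non-empty -a set would point at
// a crash-looping apiserver.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ids runs crictl with the given args and returns the listed container IDs.
func ids(args ...string) []string {
	out, err := exec.Command("sudo", append([]string{"crictl"}, args...)...).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	all := ids("ps", "-a", "--quiet", "--name=kube-apiserver") // any state
	running := ids("ps", "--quiet", "--name=kube-apiserver")   // running only
	fmt.Printf("exists: %d, running: %d\n", len(all), len(running))
}
```
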
	I1212 20:41:54.144902  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:54.155821  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:41:54.155923  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:41:54.181412  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:41:54.181435  202985 cri.go:89] found id: ""
	I1212 20:41:54.181443  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:41:54.181500  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:54.185139  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:41:54.185213  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:41:54.224180  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:41:54.224198  202985 cri.go:89] found id: ""
	I1212 20:41:54.224207  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:41:54.224264  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:54.228694  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:41:54.228763  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:41:54.263694  202985 cri.go:89] found id: ""
	I1212 20:41:54.263717  202985 logs.go:282] 0 containers: []
	W1212 20:41:54.263726  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:41:54.263732  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:41:54.263800  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:41:54.289762  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:41:54.289784  202985 cri.go:89] found id: ""
	I1212 20:41:54.289792  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:41:54.289848  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:54.293571  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:41:54.293645  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:41:54.318550  202985 cri.go:89] found id: ""
	I1212 20:41:54.318572  202985 logs.go:282] 0 containers: []
	W1212 20:41:54.318581  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:41:54.318587  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:41:54.318649  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:41:54.344598  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:41:54.344620  202985 cri.go:89] found id: ""
	I1212 20:41:54.344629  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:41:54.344686  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:54.348453  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:41:54.348526  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:41:54.374163  202985 cri.go:89] found id: ""
	I1212 20:41:54.374188  202985 logs.go:282] 0 containers: []
	W1212 20:41:54.374196  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:41:54.374203  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:41:54.374263  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:41:54.398620  202985 cri.go:89] found id: ""
	I1212 20:41:54.398643  202985 logs.go:282] 0 containers: []
	W1212 20:41:54.398651  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:41:54.398664  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:41:54.398675  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:41:54.432332  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:41:54.432361  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:41:54.469805  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:41:54.469837  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:41:54.500523  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:41:54.500554  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:41:54.529615  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:41:54.529674  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:41:54.559436  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:41:54.559503  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:41:54.622447  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:41:54.622468  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:41:54.622481  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:41:54.660001  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:41:54.660034  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:41:54.720409  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:41:54.720444  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:41:57.233497  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:41:57.245756  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:41:57.245851  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:41:57.272613  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:41:57.272638  202985 cri.go:89] found id: ""
	I1212 20:41:57.272647  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:41:57.272707  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:57.276282  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:41:57.276351  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:41:57.301157  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:41:57.301177  202985 cri.go:89] found id: ""
	I1212 20:41:57.301186  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:41:57.301242  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:57.304981  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:41:57.305055  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:41:57.328904  202985 cri.go:89] found id: ""
	I1212 20:41:57.328928  202985 logs.go:282] 0 containers: []
	W1212 20:41:57.328936  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:41:57.328942  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:41:57.329001  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:41:57.357042  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:41:57.357105  202985 cri.go:89] found id: ""
	I1212 20:41:57.357133  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:41:57.357195  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:57.360841  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:41:57.360965  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:41:57.389845  202985 cri.go:89] found id: ""
	I1212 20:41:57.389868  202985 logs.go:282] 0 containers: []
	W1212 20:41:57.389877  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:41:57.389883  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:41:57.389941  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:41:57.415702  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:41:57.415725  202985 cri.go:89] found id: ""
	I1212 20:41:57.415734  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:41:57.415790  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:41:57.419471  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:41:57.419547  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:41:57.444899  202985 cri.go:89] found id: ""
	I1212 20:41:57.444920  202985 logs.go:282] 0 containers: []
	W1212 20:41:57.444929  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:41:57.444935  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:41:57.444997  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:41:57.471736  202985 cri.go:89] found id: ""
	I1212 20:41:57.471810  202985 logs.go:282] 0 containers: []
	W1212 20:41:57.471871  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:41:57.471897  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:41:57.471912  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:41:57.484734  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:41:57.484763  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:41:57.523537  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:41:57.523575  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:41:57.563171  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:41:57.563202  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:41:57.620897  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:41:57.620933  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:41:57.682573  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:41:57.682598  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:41:57.682611  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:41:57.718645  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:41:57.718675  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:41:57.753214  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:41:57.753249  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:41:57.786860  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:41:57.786887  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:42:00.328146  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:42:00.342136  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:42:00.342209  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:42:00.386478  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:00.386500  202985 cri.go:89] found id: ""
	I1212 20:42:00.386509  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:42:00.386574  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:00.391491  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:42:00.391575  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:42:00.426758  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:00.426837  202985 cri.go:89] found id: ""
	I1212 20:42:00.426860  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:42:00.426956  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:00.431744  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:42:00.431823  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:42:00.462737  202985 cri.go:89] found id: ""
	I1212 20:42:00.462808  202985 logs.go:282] 0 containers: []
	W1212 20:42:00.462831  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:42:00.462849  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:42:00.462939  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:42:00.490343  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:00.490422  202985 cri.go:89] found id: ""
	I1212 20:42:00.490446  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:42:00.490565  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:00.494919  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:42:00.494992  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:42:00.520322  202985 cri.go:89] found id: ""
	I1212 20:42:00.520398  202985 logs.go:282] 0 containers: []
	W1212 20:42:00.520414  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:42:00.520421  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:42:00.520628  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:42:00.546136  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:00.546158  202985 cri.go:89] found id: ""
	I1212 20:42:00.546167  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:42:00.546237  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:00.549967  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:42:00.550049  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:42:00.574494  202985 cri.go:89] found id: ""
	I1212 20:42:00.574518  202985 logs.go:282] 0 containers: []
	W1212 20:42:00.574526  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:42:00.574533  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:42:00.574589  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:42:00.599261  202985 cri.go:89] found id: ""
	I1212 20:42:00.599286  202985 logs.go:282] 0 containers: []
	W1212 20:42:00.599295  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:42:00.599309  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:42:00.599321  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:42:00.663616  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:42:00.663642  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:42:00.663656  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:00.709368  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:42:00.709401  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:00.744731  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:42:00.744763  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:42:00.775608  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:42:00.775640  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:00.810003  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:42:00.810036  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:00.844129  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:42:00.844158  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:42:00.874477  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:42:00.874502  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:42:00.935691  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:42:00.935726  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:42:03.449099  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:42:03.460768  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:42:03.460836  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:42:03.504402  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:03.504423  202985 cri.go:89] found id: ""
	I1212 20:42:03.504432  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:42:03.504486  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:03.508589  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:42:03.508662  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:42:03.538388  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:03.538409  202985 cri.go:89] found id: ""
	I1212 20:42:03.538418  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:42:03.538509  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:03.542071  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:42:03.542156  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:42:03.570573  202985 cri.go:89] found id: ""
	I1212 20:42:03.570597  202985 logs.go:282] 0 containers: []
	W1212 20:42:03.570606  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:42:03.570612  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:42:03.570671  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:42:03.595784  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:03.595806  202985 cri.go:89] found id: ""
	I1212 20:42:03.595814  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:42:03.595897  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:03.599608  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:42:03.599681  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:42:03.625828  202985 cri.go:89] found id: ""
	I1212 20:42:03.625851  202985 logs.go:282] 0 containers: []
	W1212 20:42:03.625860  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:42:03.625897  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:42:03.625973  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:42:03.651518  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:03.651536  202985 cri.go:89] found id: ""
	I1212 20:42:03.651544  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:42:03.651602  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:03.655252  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:42:03.655322  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:42:03.680531  202985 cri.go:89] found id: ""
	I1212 20:42:03.680554  202985 logs.go:282] 0 containers: []
	W1212 20:42:03.680562  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:42:03.680568  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:42:03.680633  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:42:03.705680  202985 cri.go:89] found id: ""
	I1212 20:42:03.705703  202985 logs.go:282] 0 containers: []
	W1212 20:42:03.705711  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:42:03.705726  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:42:03.705739  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:03.742918  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:42:03.742947  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:03.798238  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:42:03.798309  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:03.847684  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:42:03.847769  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:42:03.916747  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:42:03.916787  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:42:03.930180  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:42:03.930207  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:42:04.022844  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:42:04.022866  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:42:04.022879  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:04.073988  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:42:04.074046  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:42:04.114715  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:42:04.114759  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:42:06.670973  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:42:06.680707  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:42:06.680820  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:42:06.704673  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:06.704735  202985 cri.go:89] found id: ""
	I1212 20:42:06.704757  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:42:06.704825  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:06.708498  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:42:06.708572  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:42:06.732563  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:06.732586  202985 cri.go:89] found id: ""
	I1212 20:42:06.732594  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:42:06.732648  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:06.736027  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:42:06.736089  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:42:06.759121  202985 cri.go:89] found id: ""
	I1212 20:42:06.759143  202985 logs.go:282] 0 containers: []
	W1212 20:42:06.759151  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:42:06.759157  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:42:06.759216  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:42:06.785649  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:06.785668  202985 cri.go:89] found id: ""
	I1212 20:42:06.785677  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:42:06.785734  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:06.789306  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:42:06.789373  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:42:06.814328  202985 cri.go:89] found id: ""
	I1212 20:42:06.814360  202985 logs.go:282] 0 containers: []
	W1212 20:42:06.814368  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:42:06.814374  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:42:06.814442  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:42:06.842646  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:06.842670  202985 cri.go:89] found id: ""
	I1212 20:42:06.842678  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:42:06.842733  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:06.846369  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:42:06.846443  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:42:06.871952  202985 cri.go:89] found id: ""
	I1212 20:42:06.872035  202985 logs.go:282] 0 containers: []
	W1212 20:42:06.872060  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:42:06.872067  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:42:06.872147  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:42:06.896456  202985 cri.go:89] found id: ""
	I1212 20:42:06.896481  202985 logs.go:282] 0 containers: []
	W1212 20:42:06.896489  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:42:06.896503  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:42:06.896520  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:42:06.924706  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:42:06.924738  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:42:06.990642  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:42:06.990673  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:42:06.990686  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:07.027506  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:42:07.027536  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:07.063027  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:42:07.063058  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:07.104543  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:42:07.104575  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:42:07.133782  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:42:07.133808  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:42:07.192471  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:42:07.192552  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:42:07.209304  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:42:07.209329  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:09.758090  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:42:09.768296  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:42:09.768372  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:42:09.793456  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:09.793477  202985 cri.go:89] found id: ""
	I1212 20:42:09.793485  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:42:09.793542  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:09.797194  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:42:09.797276  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:42:09.821714  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:09.821784  202985 cri.go:89] found id: ""
	I1212 20:42:09.821805  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:42:09.821891  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:09.825513  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:42:09.825689  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:42:09.853249  202985 cri.go:89] found id: ""
	I1212 20:42:09.853274  202985 logs.go:282] 0 containers: []
	W1212 20:42:09.853283  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:42:09.853289  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:42:09.853344  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:42:09.877202  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:09.877225  202985 cri.go:89] found id: ""
	I1212 20:42:09.877233  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:42:09.877288  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:09.880821  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:42:09.880889  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:42:09.903936  202985 cri.go:89] found id: ""
	I1212 20:42:09.904008  202985 logs.go:282] 0 containers: []
	W1212 20:42:09.904031  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:42:09.904049  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:42:09.904138  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:42:09.928830  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:09.928854  202985 cri.go:89] found id: ""
	I1212 20:42:09.928862  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:42:09.928919  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:09.932554  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:42:09.932659  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:42:09.958064  202985 cri.go:89] found id: ""
	I1212 20:42:09.958089  202985 logs.go:282] 0 containers: []
	W1212 20:42:09.958098  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:42:09.958104  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:42:09.958168  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:42:09.986250  202985 cri.go:89] found id: ""
	I1212 20:42:09.986272  202985 logs.go:282] 0 containers: []
	W1212 20:42:09.986281  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:42:09.986294  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:42:09.986305  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:42:10.046243  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:42:10.046276  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:42:10.066270  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:42:10.066309  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:10.107049  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:42:10.107079  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:10.151327  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:42:10.151360  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:42:10.242018  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:42:10.242036  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:42:10.242048  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:10.330493  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:42:10.330528  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:10.374139  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:42:10.374173  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:42:10.405744  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:42:10.405779  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:42:12.951949  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:42:12.962407  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:42:12.962468  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:42:13.002705  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:13.002724  202985 cri.go:89] found id: ""
	I1212 20:42:13.002732  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:42:13.002787  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:13.009503  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:42:13.009582  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:42:13.036948  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:13.036967  202985 cri.go:89] found id: ""
	I1212 20:42:13.036975  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:42:13.037028  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:13.040637  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:42:13.040694  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:42:13.074090  202985 cri.go:89] found id: ""
	I1212 20:42:13.074112  202985 logs.go:282] 0 containers: []
	W1212 20:42:13.074120  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:42:13.074126  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:42:13.074184  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:42:13.099359  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:13.099377  202985 cri.go:89] found id: ""
	I1212 20:42:13.099385  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:42:13.099437  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:13.103015  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:42:13.103077  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:42:13.130061  202985 cri.go:89] found id: ""
	I1212 20:42:13.130081  202985 logs.go:282] 0 containers: []
	W1212 20:42:13.130089  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:42:13.130095  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:42:13.130149  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:42:13.158007  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:13.158080  202985 cri.go:89] found id: ""
	I1212 20:42:13.158102  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:42:13.158183  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:13.161879  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:42:13.161940  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:42:13.205145  202985 cri.go:89] found id: ""
	I1212 20:42:13.205166  202985 logs.go:282] 0 containers: []
	W1212 20:42:13.205174  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:42:13.205180  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:42:13.205236  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:42:13.290214  202985 cri.go:89] found id: ""
	I1212 20:42:13.290234  202985 logs.go:282] 0 containers: []
	W1212 20:42:13.290243  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:42:13.290257  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:42:13.290269  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:13.343276  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:42:13.343310  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:13.398674  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:42:13.398708  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:13.442344  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:42:13.442377  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:42:13.478450  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:42:13.478486  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:42:13.539962  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:42:13.539997  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:13.577797  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:42:13.577831  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:42:13.619164  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:42:13.619192  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:42:13.633302  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:42:13.633327  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:42:13.717523  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:42:16.218451  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:42:16.229371  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:42:16.229443  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:42:16.259194  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:16.259217  202985 cri.go:89] found id: ""
	I1212 20:42:16.259225  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:42:16.259283  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:16.262750  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:42:16.262817  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:42:16.287204  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:16.287223  202985 cri.go:89] found id: ""
	I1212 20:42:16.287231  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:42:16.287293  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:16.290908  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:42:16.290976  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:42:16.315347  202985 cri.go:89] found id: ""
	I1212 20:42:16.315371  202985 logs.go:282] 0 containers: []
	W1212 20:42:16.315380  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:42:16.315386  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:42:16.315442  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:42:16.341451  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:16.341473  202985 cri.go:89] found id: ""
	I1212 20:42:16.341481  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:42:16.341562  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:16.345279  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:42:16.345354  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:42:16.369259  202985 cri.go:89] found id: ""
	I1212 20:42:16.369281  202985 logs.go:282] 0 containers: []
	W1212 20:42:16.369289  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:42:16.369294  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:42:16.369351  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:42:16.398438  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:16.398462  202985 cri.go:89] found id: ""
	I1212 20:42:16.398471  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:42:16.398559  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:16.402220  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:42:16.402289  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:42:16.426804  202985 cri.go:89] found id: ""
	I1212 20:42:16.426826  202985 logs.go:282] 0 containers: []
	W1212 20:42:16.426834  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:42:16.426841  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:42:16.426901  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:42:16.455990  202985 cri.go:89] found id: ""
	I1212 20:42:16.456015  202985 logs.go:282] 0 containers: []
	W1212 20:42:16.456029  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:42:16.456043  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:42:16.456055  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:42:16.513371  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:42:16.513404  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:42:16.575471  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:42:16.575493  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:42:16.575507  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:16.610694  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:42:16.610727  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:16.656216  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:42:16.656246  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:42:16.668920  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:42:16.668948  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:16.703488  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:42:16.703522  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:16.742722  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:42:16.742756  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:42:16.773538  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:42:16.773572  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:42:19.315647  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:42:19.325614  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:42:19.325680  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:42:19.357290  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:19.357310  202985 cri.go:89] found id: ""
	I1212 20:42:19.357319  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:42:19.357375  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:19.361017  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:42:19.361084  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:42:19.386744  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:19.386763  202985 cri.go:89] found id: ""
	I1212 20:42:19.386771  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:42:19.386836  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:19.390549  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:42:19.390615  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:42:19.414487  202985 cri.go:89] found id: ""
	I1212 20:42:19.414558  202985 logs.go:282] 0 containers: []
	W1212 20:42:19.414580  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:42:19.414598  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:42:19.414672  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:42:19.440089  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:19.440151  202985 cri.go:89] found id: ""
	I1212 20:42:19.440175  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:42:19.440243  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:19.443758  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:42:19.443895  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:42:19.468031  202985 cri.go:89] found id: ""
	I1212 20:42:19.468095  202985 logs.go:282] 0 containers: []
	W1212 20:42:19.468118  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:42:19.468143  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:42:19.468218  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:42:19.492224  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:19.492246  202985 cri.go:89] found id: ""
	I1212 20:42:19.492255  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:42:19.492328  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:19.495712  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:42:19.495820  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:42:19.518622  202985 cri.go:89] found id: ""
	I1212 20:42:19.518646  202985 logs.go:282] 0 containers: []
	W1212 20:42:19.518655  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:42:19.518661  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:42:19.518720  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:42:19.543934  202985 cri.go:89] found id: ""
	I1212 20:42:19.543966  202985 logs.go:282] 0 containers: []
	W1212 20:42:19.543977  202985 logs.go:284] No container was found matching "storage-provisioner"
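Each sweep above begins by enumerating control-plane containers one component at a time via `sudo crictl ps -a --quiet --name=<component>`; an empty result is what the log reports as `found id: ""` followed by `0 containers`. A minimal Go sketch of that enumeration, illustrative only and not minikube's actual cri.go code:

```go
// Sketch (assumed helper, not minikube's real implementation) of collecting
// container IDs for a component, mirroring: sudo crictl ps -a --quiet --name=<name>.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns all container IDs (any state) whose name matches
// the given component.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps for %q: %w", name, err)
	}
	// --quiet prints one bare container ID per line; empty output means zero
	// containers, which the log above reports as `found id: ""`.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := listContainerIDs(c)
		fmt.Printf("%s: %v (err=%v)\n", c, ids, err)
	}
}
```

With `--quiet`, crictl emits only IDs, one per line, so splitting on whitespace is enough to recover the list.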
	I1212 20:42:19.543993  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:42:19.544004  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:19.585531  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:42:19.585562  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:19.624469  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:42:19.624505  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:42:19.658429  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:42:19.658458  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:42:19.716344  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:42:19.716375  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:42:19.736846  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:42:19.736923  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:19.781508  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:42:19.781538  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:42:19.810334  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:42:19.810363  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:42:19.879633  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
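The `describe nodes` step fails with a TCP-level refusal on localhost:8443 — nothing is accepting connections on the apiserver port even though an apiserver container exists. A quick sketch of checking that port directly, with the host and port taken from the error text above rather than from minikube's configuration code:

```go
// Probe the apiserver port before attempting kubectl; a refused dial is the
// TCP-level symptom behind the "connection to the server localhost:8443 was
// refused" error shown in the log.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
```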
	I1212 20:42:19.879656  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:42:19.879670  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:22.413940  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:42:22.423816  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:42:22.423903  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:42:22.448564  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:22.448584  202985 cri.go:89] found id: ""
	I1212 20:42:22.448593  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:42:22.448648  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:22.452300  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:42:22.452370  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:42:22.475889  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:22.475913  202985 cri.go:89] found id: ""
	I1212 20:42:22.475921  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:42:22.476003  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:22.479562  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:42:22.479632  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:42:22.503782  202985 cri.go:89] found id: ""
	I1212 20:42:22.503807  202985 logs.go:282] 0 containers: []
	W1212 20:42:22.503817  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:42:22.503823  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:42:22.503910  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:42:22.531802  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:22.531825  202985 cri.go:89] found id: ""
	I1212 20:42:22.531867  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:42:22.531925  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:22.535322  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:42:22.535388  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:42:22.559165  202985 cri.go:89] found id: ""
	I1212 20:42:22.559192  202985 logs.go:282] 0 containers: []
	W1212 20:42:22.559201  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:42:22.559208  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:42:22.559270  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:42:22.584038  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:22.584061  202985 cri.go:89] found id: ""
	I1212 20:42:22.584070  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:42:22.584126  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:22.587619  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:42:22.587692  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:42:22.613218  202985 cri.go:89] found id: ""
	I1212 20:42:22.613243  202985 logs.go:282] 0 containers: []
	W1212 20:42:22.613254  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:42:22.613260  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:42:22.613321  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:42:22.641944  202985 cri.go:89] found id: ""
	I1212 20:42:22.641966  202985 logs.go:282] 0 containers: []
	W1212 20:42:22.641974  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:42:22.641987  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:42:22.641998  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:42:22.654544  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:42:22.654567  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:42:22.725087  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:42:22.725107  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:42:22.725120  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:22.757911  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:42:22.757940  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:22.790088  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:42:22.790116  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:42:22.818585  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:42:22.818615  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:42:22.877390  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:42:22.877423  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:22.918275  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:42:22.918307  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:22.955310  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:42:22.955381  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
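The timestamps show the same sweep repeating roughly every three seconds: a `sudo pgrep -xnf kube-apiserver.*minikube.*` probe, then a full log sweep while the API stays unreachable. A sketch of such a poll loop, with an assumed overall deadline (not minikube's actual timeout value):

```go
// Sketch of the wait loop implied by the repeating sweeps above. Names and
// the six-minute deadline are illustrative assumptions.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func apiserverProcessRunning() bool {
	// pgrep exits non-zero when no process matches, so err != nil means "not running".
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if apiserverProcessRunning() {
			fmt.Println("kube-apiserver process found; sweeping logs while the API is unhealthy")
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for a healthy apiserver")
}
```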
	I1212 20:42:25.488007  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:42:25.498843  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:42:25.498907  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:42:25.529147  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:25.529165  202985 cri.go:89] found id: ""
	I1212 20:42:25.529174  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:42:25.529232  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:25.534454  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:42:25.534519  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:42:25.573069  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:25.573091  202985 cri.go:89] found id: ""
	I1212 20:42:25.573099  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:42:25.573157  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:25.577096  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:42:25.577166  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:42:25.609785  202985 cri.go:89] found id: ""
	I1212 20:42:25.609807  202985 logs.go:282] 0 containers: []
	W1212 20:42:25.609816  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:42:25.609822  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:42:25.609878  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:42:25.643605  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:25.643623  202985 cri.go:89] found id: ""
	I1212 20:42:25.643631  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:42:25.643686  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:25.647694  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:42:25.647764  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:42:25.690033  202985 cri.go:89] found id: ""
	I1212 20:42:25.690054  202985 logs.go:282] 0 containers: []
	W1212 20:42:25.690064  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:42:25.690070  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:42:25.690125  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:42:25.730155  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:25.730175  202985 cri.go:89] found id: ""
	I1212 20:42:25.730183  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:42:25.730240  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:25.734087  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:42:25.734159  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:42:25.766653  202985 cri.go:89] found id: ""
	I1212 20:42:25.766679  202985 logs.go:282] 0 containers: []
	W1212 20:42:25.766688  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:42:25.766694  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:42:25.766751  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:42:25.795380  202985 cri.go:89] found id: ""
	I1212 20:42:25.795403  202985 logs.go:282] 0 containers: []
	W1212 20:42:25.795412  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:42:25.795426  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:42:25.795437  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:42:25.873930  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:42:25.873974  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:42:25.887489  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:42:25.887521  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:25.925876  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:42:25.925911  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:25.974051  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:42:25.974138  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:42:26.050763  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:42:26.050789  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:42:26.050801  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:26.085929  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:42:26.085971  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:26.118415  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:42:26.118445  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:42:26.147308  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:42:26.147344  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
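The `container status` step uses a double fallback: `which crictl || echo crictl` resolves crictl to an absolute path, and if `which` fails, `echo crictl` leaves the bare name so the shell still attempts a PATH lookup; if that crictl invocation fails outright, the `|| sudo docker ps -a` branch runs instead. The one-liner can be reproduced verbatim:

```go
// Reproduce the container-status one-liner from the log, including both
// fallbacks (crictl path resolution, then docker).
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("err=%v\n%s", err, out)
}
```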
	I1212 20:42:28.678912  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:42:28.692483  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:42:28.692556  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:42:28.734831  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:28.734850  202985 cri.go:89] found id: ""
	I1212 20:42:28.734858  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:42:28.734912  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:28.739040  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:42:28.739100  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:42:28.770538  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:28.770556  202985 cri.go:89] found id: ""
	I1212 20:42:28.770564  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:42:28.770617  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:28.776186  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:42:28.776253  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:42:28.812718  202985 cri.go:89] found id: ""
	I1212 20:42:28.812738  202985 logs.go:282] 0 containers: []
	W1212 20:42:28.812747  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:42:28.812753  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:42:28.812808  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:42:28.839697  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:28.839714  202985 cri.go:89] found id: ""
	I1212 20:42:28.839723  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:42:28.839777  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:28.843874  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:42:28.843967  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:42:28.873905  202985 cri.go:89] found id: ""
	I1212 20:42:28.873935  202985 logs.go:282] 0 containers: []
	W1212 20:42:28.873944  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:42:28.873950  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:42:28.874004  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:42:28.902156  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:28.902179  202985 cri.go:89] found id: ""
	I1212 20:42:28.902187  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:42:28.902249  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:28.906211  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:42:28.906281  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:42:28.952279  202985 cri.go:89] found id: ""
	I1212 20:42:28.952305  202985 logs.go:282] 0 containers: []
	W1212 20:42:28.952314  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:42:28.952319  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:42:28.952377  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:42:29.017226  202985 cri.go:89] found id: ""
	I1212 20:42:29.017250  202985 logs.go:282] 0 containers: []
	W1212 20:42:29.017258  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:42:29.017273  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:42:29.017284  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:42:29.086715  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:42:29.086746  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:42:29.101777  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:42:29.101803  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:42:29.177980  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:42:29.178005  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:42:29.178020  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:29.220790  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:42:29.220822  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:29.268291  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:42:29.268323  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:42:29.319546  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:42:29.319573  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:29.365966  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:42:29.365999  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:29.416763  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:42:29.416793  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
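The kubelet and containerd sweeps are plain journald reads: the last 400 lines of each systemd unit. A sketch parameterized by unit name and line count, with both values read off the log above:

```go
// Tail the journal for a systemd unit, mirroring: sudo journalctl -u <unit> -n 400.
package main

import (
	"fmt"
	"os/exec"
)

func lastUnitLines(unit string, n int) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).Output()
	return string(out), err
}

func main() {
	for _, u := range []string{"kubelet", "containerd"} {
		text, err := lastUnitLines(u, 400)
		fmt.Printf("== %s (err=%v) ==\n%s\n", u, err, text)
	}
}
```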
	I1212 20:42:31.950694  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:42:31.963373  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:42:31.963439  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:42:32.021144  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:32.021169  202985 cri.go:89] found id: ""
	I1212 20:42:32.021177  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:42:32.021235  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:32.036555  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:42:32.036634  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:42:32.073324  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:32.073343  202985 cri.go:89] found id: ""
	I1212 20:42:32.073351  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:42:32.073407  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:32.077773  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:42:32.077842  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:42:32.113283  202985 cri.go:89] found id: ""
	I1212 20:42:32.113304  202985 logs.go:282] 0 containers: []
	W1212 20:42:32.113313  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:42:32.113318  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:42:32.113375  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:42:32.142106  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:32.142176  202985 cri.go:89] found id: ""
	I1212 20:42:32.142197  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:42:32.142287  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:32.146660  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:42:32.146780  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:42:32.188694  202985 cri.go:89] found id: ""
	I1212 20:42:32.188770  202985 logs.go:282] 0 containers: []
	W1212 20:42:32.188793  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:42:32.188811  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:42:32.188899  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:42:32.237993  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:32.238063  202985 cri.go:89] found id: ""
	I1212 20:42:32.238085  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:42:32.238177  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:32.242294  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:42:32.242415  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:42:32.272411  202985 cri.go:89] found id: ""
	I1212 20:42:32.272483  202985 logs.go:282] 0 containers: []
	W1212 20:42:32.272506  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:42:32.272535  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:42:32.272642  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:42:32.311999  202985 cri.go:89] found id: ""
	I1212 20:42:32.312059  202985 logs.go:282] 0 containers: []
	W1212 20:42:32.312091  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:42:32.312135  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:42:32.312162  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:32.359252  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:42:32.359476  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:32.415013  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:42:32.415044  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:42:32.451036  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:42:32.451071  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:42:32.499188  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:42:32.499217  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:42:32.564840  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:42:32.564871  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:42:32.577792  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:42:32.577820  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:42:32.669020  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:42:32.669043  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:42:32.669057  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:32.737474  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:42:32.737511  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
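Per-component logs come from `crictl logs --tail 400 <id>` against the container IDs found during enumeration, using the absolute /usr/local/bin/crictl path resolved earlier by `which crictl`. A sketch over two of the long-lived IDs reported in this run:

```go
// Tail the CRI logs of specific containers, as the "Gathering logs for ..."
// steps above do. The IDs are the apiserver and etcd containers from this run.
package main

import (
	"fmt"
	"os/exec"
)

func tailContainerLogs(id string) (string, error) {
	out, err := exec.Command("sudo", "/usr/local/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, id := range []string{
		"564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7",
		"6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748",
	} {
		text, err := tailContainerLogs(id)
		fmt.Printf("== %s (err=%v) ==\n%s\n", id[:12], err, text)
	}
}
```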
	I1212 20:42:35.293125  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:42:35.303121  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:42:35.303189  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:42:35.328028  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:35.328052  202985 cri.go:89] found id: ""
	I1212 20:42:35.328060  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:42:35.328115  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:35.331864  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:42:35.331950  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:42:35.356232  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:35.356255  202985 cri.go:89] found id: ""
	I1212 20:42:35.356263  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:42:35.356318  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:35.359628  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:42:35.359699  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:42:35.383805  202985 cri.go:89] found id: ""
	I1212 20:42:35.383829  202985 logs.go:282] 0 containers: []
	W1212 20:42:35.383882  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:42:35.383891  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:42:35.383955  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:42:35.409275  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:35.409294  202985 cri.go:89] found id: ""
	I1212 20:42:35.409302  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:42:35.409363  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:35.412885  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:42:35.412982  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:42:35.440933  202985 cri.go:89] found id: ""
	I1212 20:42:35.440955  202985 logs.go:282] 0 containers: []
	W1212 20:42:35.440964  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:42:35.440970  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:42:35.441036  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:42:35.464772  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:35.464793  202985 cri.go:89] found id: ""
	I1212 20:42:35.464801  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:42:35.464855  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:35.468409  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:42:35.468487  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:42:35.492519  202985 cri.go:89] found id: ""
	I1212 20:42:35.492545  202985 logs.go:282] 0 containers: []
	W1212 20:42:35.492554  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:42:35.492560  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:42:35.492655  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:42:35.519023  202985 cri.go:89] found id: ""
	I1212 20:42:35.519053  202985 logs.go:282] 0 containers: []
	W1212 20:42:35.519062  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:42:35.519074  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:42:35.519088  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:42:35.576156  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:42:35.576190  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:42:35.647136  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:42:35.647196  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:42:35.647222  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:35.695591  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:42:35.695674  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:42:35.769858  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:42:35.769891  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:42:35.858663  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:42:35.858732  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:42:35.873850  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:42:35.873884  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:35.923400  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:42:35.923427  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:35.971325  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:42:35.971371  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
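Note that the "Gathering logs for ..." steps run in a different order on every sweep: etcd first in one pass, dmesg first in another, kubelet or containerd first elsewhere. That is consistent with ranging over a Go map of log sources, whose iteration order is deliberately randomized by the runtime; a tiny demonstration:

```go
// Ranging over a Go map yields a different key order on different runs, which
// would explain the shuffled gather order between the sweeps above.
package main

import "fmt"

func main() {
	sources := map[string]string{
		"kubelet":    "journalctl -u kubelet -n 400",
		"dmesg":      "dmesg ... | tail -n 400",
		"etcd":       "crictl logs --tail 400 <id>",
		"containerd": "journalctl -u containerd -n 400",
	}
	for name := range sources { // order intentionally randomized by the runtime
		fmt.Println("gathering", name)
	}
}
```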
	I1212 20:42:38.526639  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:42:38.536774  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:42:38.536842  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:42:38.562405  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:38.562428  202985 cri.go:89] found id: ""
	I1212 20:42:38.562437  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:42:38.562499  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:38.566246  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:42:38.566314  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:42:38.590647  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:38.590667  202985 cri.go:89] found id: ""
	I1212 20:42:38.590675  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:42:38.590731  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:38.594385  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:42:38.594454  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:42:38.623299  202985 cri.go:89] found id: ""
	I1212 20:42:38.623368  202985 logs.go:282] 0 containers: []
	W1212 20:42:38.623389  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:42:38.623407  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:42:38.623494  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:42:38.648776  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:38.648797  202985 cri.go:89] found id: ""
	I1212 20:42:38.648806  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:42:38.648862  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:38.652691  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:42:38.652758  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:42:38.679232  202985 cri.go:89] found id: ""
	I1212 20:42:38.679258  202985 logs.go:282] 0 containers: []
	W1212 20:42:38.679268  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:42:38.679274  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:42:38.679333  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:42:38.713049  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:38.713074  202985 cri.go:89] found id: ""
	I1212 20:42:38.713082  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:42:38.713147  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:38.718090  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:42:38.718170  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:42:38.748342  202985 cri.go:89] found id: ""
	I1212 20:42:38.748373  202985 logs.go:282] 0 containers: []
	W1212 20:42:38.748383  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:42:38.748388  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:42:38.748456  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:42:38.775204  202985 cri.go:89] found id: ""
	I1212 20:42:38.775230  202985 logs.go:282] 0 containers: []
	W1212 20:42:38.775238  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:42:38.775251  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:42:38.775262  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:42:38.841367  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:42:38.841401  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:42:38.902247  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:42:38.902309  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:42:38.902328  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:38.944338  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:42:38.944372  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:42:38.956933  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:42:38.956963  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:39.002713  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:42:39.002744  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:39.040119  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:42:39.040151  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:39.073080  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:42:39.073113  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:42:39.102901  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:42:39.102934  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
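The dmesg step restricts the kernel ring buffer to warning-and-worse severities (`--level warn,err,crit,alert,emerg`); as best I can tell from util-linux, `-H` requests human-readable output, `-P` suppresses the pager that `-H` would otherwise start, and `-L=never` disables color, with `tail -n 400` trimming the result. Reproducing the pipeline as run:

```go
// Run the dmesg filter from the log; flag semantics in the lead-in are from
// util-linux documentation, not from this report.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("err=%v\n%s", err, out)
}
```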
	I1212 20:42:41.648246  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:42:41.658033  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:42:41.658102  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:42:41.682427  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:41.682449  202985 cri.go:89] found id: ""
	I1212 20:42:41.682456  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:42:41.682512  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:41.686068  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:42:41.686134  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:42:41.719685  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:41.719707  202985 cri.go:89] found id: ""
	I1212 20:42:41.719715  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:42:41.719770  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:41.725822  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:42:41.725895  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:42:41.752182  202985 cri.go:89] found id: ""
	I1212 20:42:41.752207  202985 logs.go:282] 0 containers: []
	W1212 20:42:41.752215  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:42:41.752221  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:42:41.752278  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:42:41.781124  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:41.781147  202985 cri.go:89] found id: ""
	I1212 20:42:41.781156  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:42:41.781217  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:41.784974  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:42:41.785047  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:42:41.813187  202985 cri.go:89] found id: ""
	I1212 20:42:41.813210  202985 logs.go:282] 0 containers: []
	W1212 20:42:41.813218  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:42:41.813224  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:42:41.813289  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:42:41.837440  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:41.837459  202985 cri.go:89] found id: ""
	I1212 20:42:41.837467  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:42:41.837521  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:41.841232  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:42:41.841324  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:42:41.866317  202985 cri.go:89] found id: ""
	I1212 20:42:41.866339  202985 logs.go:282] 0 containers: []
	W1212 20:42:41.866348  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:42:41.866354  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:42:41.866412  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:42:41.892095  202985 cri.go:89] found id: ""
	I1212 20:42:41.892174  202985 logs.go:282] 0 containers: []
	W1212 20:42:41.892190  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:42:41.892204  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:42:41.892216  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:42:41.921291  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:42:41.921326  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:41.958005  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:42:41.958037  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:42:41.987357  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:42:41.987383  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:42:42.062004  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:42:42.062048  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:42:42.076723  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:42:42.076768  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:42:42.177736  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:42:42.177759  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:42:42.177774  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:42.220950  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:42:42.220994  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:42.263186  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:42:42.263224  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:44.806738  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:42:44.817794  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:42:44.817869  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:42:44.842363  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:44.842385  202985 cri.go:89] found id: ""
	I1212 20:42:44.842393  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:42:44.842447  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:44.846073  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:42:44.846151  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:42:44.870647  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:44.870667  202985 cri.go:89] found id: ""
	I1212 20:42:44.870675  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:42:44.870732  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:44.874383  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:42:44.874452  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:42:44.900202  202985 cri.go:89] found id: ""
	I1212 20:42:44.900228  202985 logs.go:282] 0 containers: []
	W1212 20:42:44.900237  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:42:44.900243  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:42:44.900301  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:42:44.925881  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:44.925902  202985 cri.go:89] found id: ""
	I1212 20:42:44.925911  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:42:44.925967  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:44.929721  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:42:44.929801  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:42:44.954860  202985 cri.go:89] found id: ""
	I1212 20:42:44.954884  202985 logs.go:282] 0 containers: []
	W1212 20:42:44.954894  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:42:44.954900  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:42:44.954962  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:42:44.980218  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:44.980241  202985 cri.go:89] found id: ""
	I1212 20:42:44.980249  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:42:44.980308  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:44.983973  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:42:44.984075  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:42:45.038803  202985 cri.go:89] found id: ""
	I1212 20:42:45.038829  202985 logs.go:282] 0 containers: []
	W1212 20:42:45.038838  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:42:45.038845  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:42:45.038928  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:42:45.076431  202985 cri.go:89] found id: ""
	I1212 20:42:45.076534  202985 logs.go:282] 0 containers: []
	W1212 20:42:45.076560  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:42:45.076603  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:42:45.076636  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:45.126496  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:42:45.126585  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:42:45.163232  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:42:45.163364  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:42:45.236386  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:42:45.236490  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:42:45.342739  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:42:45.342809  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:42:45.357214  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:42:45.357243  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:45.400633  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:42:45.400803  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:45.435469  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:42:45.435500  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:42:45.518702  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:42:45.518724  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:42:45.518739  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
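The block above is one complete pass of the diagnostic loop the runner executes while waiting for the control plane: locate a kube-apiserver process with pgrep, enumerate containers for each control-plane component with crictl, then tail the logs of whatever was found. Below is a minimal Go sketch of that enumerate-and-tail step. It shells out to the same crictl commands the log records, but the listContainers helper and the hard-coded component list are illustrative, not minikube's actual code.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the "crictl ps -a --quiet --name=<component>"
// calls in the log above: it returns the IDs of all containers, running
// or exited, whose name matches the given component.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// The same component set the log cycles through.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
			continue
		}
		// Tail each hit's logs the way the cycle above does.
		for _, id := range ids {
			exec.Command("sudo", "crictl", "logs", "--tail", "400", id).Run()
		}
	}
}
```

The real loop also has a fallback the sketch omits: when crictl is not on the PATH, container status is gathered with `sudo docker ps -a` instead, which is what the `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a` command above encodes.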
	I1212 20:42:48.059993  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:42:48.071254  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:42:48.071332  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:42:48.099377  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:48.099399  202985 cri.go:89] found id: ""
	I1212 20:42:48.099408  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:42:48.099475  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:48.103233  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:42:48.103305  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:42:48.130129  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:48.130160  202985 cri.go:89] found id: ""
	I1212 20:42:48.130170  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:42:48.130226  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:48.134297  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:42:48.134372  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:42:48.159052  202985 cri.go:89] found id: ""
	I1212 20:42:48.159077  202985 logs.go:282] 0 containers: []
	W1212 20:42:48.159086  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:42:48.159093  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:42:48.159148  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:42:48.185081  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:48.185103  202985 cri.go:89] found id: ""
	I1212 20:42:48.185111  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:42:48.185199  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:48.189074  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:42:48.189147  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:42:48.216995  202985 cri.go:89] found id: ""
	I1212 20:42:48.217018  202985 logs.go:282] 0 containers: []
	W1212 20:42:48.217027  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:42:48.217033  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:42:48.217091  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:42:48.248266  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:48.248288  202985 cri.go:89] found id: ""
	I1212 20:42:48.248296  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:42:48.248351  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:48.251883  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:42:48.251960  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:42:48.275801  202985 cri.go:89] found id: ""
	I1212 20:42:48.275827  202985 logs.go:282] 0 containers: []
	W1212 20:42:48.275864  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:42:48.275874  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:42:48.275941  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:42:48.301243  202985 cri.go:89] found id: ""
	I1212 20:42:48.301267  202985 logs.go:282] 0 containers: []
	W1212 20:42:48.301275  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:42:48.301296  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:42:48.301307  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:42:48.363855  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:42:48.363895  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:48.411702  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:42:48.411735  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:48.448620  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:42:48.448654  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:48.501086  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:42:48.501119  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:42:48.532205  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:42:48.532240  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:42:48.563130  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:42:48.563159  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:42:48.575565  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:42:48.575591  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:42:48.641188  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:42:48.641211  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:42:48.641226  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:51.180709  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:42:51.193240  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:42:51.193309  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:42:51.218081  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:51.218103  202985 cri.go:89] found id: ""
	I1212 20:42:51.218112  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:42:51.218170  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:51.222266  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:42:51.222340  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:42:51.250801  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:51.250821  202985 cri.go:89] found id: ""
	I1212 20:42:51.250830  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:42:51.250886  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:51.254721  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:42:51.254795  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:42:51.282618  202985 cri.go:89] found id: ""
	I1212 20:42:51.282642  202985 logs.go:282] 0 containers: []
	W1212 20:42:51.282651  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:42:51.282657  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:42:51.282719  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:42:51.307952  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:51.308024  202985 cri.go:89] found id: ""
	I1212 20:42:51.308040  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:42:51.308110  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:51.311775  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:42:51.311876  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:42:51.342975  202985 cri.go:89] found id: ""
	I1212 20:42:51.343046  202985 logs.go:282] 0 containers: []
	W1212 20:42:51.343082  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:42:51.343108  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:42:51.343196  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:42:51.370722  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:51.370783  202985 cri.go:89] found id: ""
	I1212 20:42:51.370813  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:42:51.370903  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:51.374912  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:42:51.374984  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:42:51.402934  202985 cri.go:89] found id: ""
	I1212 20:42:51.402958  202985 logs.go:282] 0 containers: []
	W1212 20:42:51.402967  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:42:51.402979  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:42:51.403040  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:42:51.433999  202985 cri.go:89] found id: ""
	I1212 20:42:51.434028  202985 logs.go:282] 0 containers: []
	W1212 20:42:51.434038  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:42:51.434053  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:42:51.434066  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:42:51.496565  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:42:51.496599  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:42:51.566320  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:42:51.566341  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:42:51.566353  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:51.599414  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:42:51.599443  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:51.633904  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:42:51.633937  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:42:51.674293  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:42:51.674321  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:42:51.687228  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:42:51.687255  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:51.731631  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:42:51.731669  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:51.764814  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:42:51.764844  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
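Comparing the pgrep timestamps across passes (20:42:44.8, 20:42:48.0, 20:42:51.1, 20:42:54.2, ...) shows the loop re-running roughly every two and a half to three seconds. In this excerpt the process check evidently passes, and the loop repeats anyway because the apiserver is not yet serving requests; the sketch below models only the process-polling cadence, with the function name and six-minute deadline as illustrative assumptions.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep on the cadence visible in the
// timestamps above until a matching process shows up or the deadline
// passes. The name and timeout are illustrative, not minikube's.
func waitForAPIServerProcess(timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Run() == nil {
			return true // pgrep exited 0: a matching process exists
		}
		time.Sleep(2500 * time.Millisecond)
	}
	return false
}

func main() {
	fmt.Println("apiserver process found:",
		waitForAPIServerProcess(6*time.Minute))
}
```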
	I1212 20:42:54.296425  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:42:54.306358  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:42:54.306424  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:42:54.331201  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:54.331219  202985 cri.go:89] found id: ""
	I1212 20:42:54.331227  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:42:54.331281  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:54.335109  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:42:54.335182  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:42:54.362427  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:54.362447  202985 cri.go:89] found id: ""
	I1212 20:42:54.362456  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:42:54.362509  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:54.366162  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:42:54.366230  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:42:54.395446  202985 cri.go:89] found id: ""
	I1212 20:42:54.395468  202985 logs.go:282] 0 containers: []
	W1212 20:42:54.395476  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:42:54.395482  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:42:54.395541  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:42:54.422215  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:54.422238  202985 cri.go:89] found id: ""
	I1212 20:42:54.422248  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:42:54.422305  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:54.425982  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:42:54.426055  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:42:54.456896  202985 cri.go:89] found id: ""
	I1212 20:42:54.456920  202985 logs.go:282] 0 containers: []
	W1212 20:42:54.456929  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:42:54.456935  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:42:54.456992  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:42:54.485367  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:54.485391  202985 cri.go:89] found id: ""
	I1212 20:42:54.485404  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:42:54.485491  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:54.490841  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:42:54.490915  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:42:54.517272  202985 cri.go:89] found id: ""
	I1212 20:42:54.517296  202985 logs.go:282] 0 containers: []
	W1212 20:42:54.517305  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:42:54.517312  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:42:54.517370  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:42:54.544104  202985 cri.go:89] found id: ""
	I1212 20:42:54.544126  202985 logs.go:282] 0 containers: []
	W1212 20:42:54.544135  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:42:54.544150  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:42:54.544162  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:54.582141  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:42:54.582171  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:42:54.645250  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:42:54.645293  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:54.683522  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:42:54.683555  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:54.718402  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:42:54.718453  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:54.761744  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:42:54.761779  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:42:54.790482  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:42:54.790516  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:42:54.818153  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:42:54.818179  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:42:54.830773  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:42:54.830797  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:42:54.898737  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
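Every `describe nodes` attempt above fails the same way: kubectl, pointed at /var/lib/minikube/kubeconfig, gets connection refused from localhost:8443. Together with the container enumeration, that says the kube-apiserver container exists but nothing is accepting connections on its port yet. The condition can be reproduced with a plain TCP probe; this sketch checks reachability only, not TLS or the apiserver's health endpoints.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the endpoint kubectl is failing against in the log above.
	// A refused dial is the same condition the retry loop is waiting out.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```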
	I1212 20:42:57.399701  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:42:57.410757  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:42:57.410831  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:42:57.442909  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:57.442928  202985 cri.go:89] found id: ""
	I1212 20:42:57.442937  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:42:57.443009  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:57.448601  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:42:57.448683  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:42:57.485394  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:57.485413  202985 cri.go:89] found id: ""
	I1212 20:42:57.485421  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:42:57.485474  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:57.491755  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:42:57.491824  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:42:57.521176  202985 cri.go:89] found id: ""
	I1212 20:42:57.521251  202985 logs.go:282] 0 containers: []
	W1212 20:42:57.521270  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:42:57.521277  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:42:57.521346  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:42:57.550854  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:57.550880  202985 cri.go:89] found id: ""
	I1212 20:42:57.550889  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:42:57.550945  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:57.554629  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:42:57.554701  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:42:57.579948  202985 cri.go:89] found id: ""
	I1212 20:42:57.580021  202985 logs.go:282] 0 containers: []
	W1212 20:42:57.580043  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:42:57.580064  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:42:57.580155  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:42:57.604868  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:57.604889  202985 cri.go:89] found id: ""
	I1212 20:42:57.604898  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:42:57.604956  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:42:57.608589  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:42:57.608657  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:42:57.634002  202985 cri.go:89] found id: ""
	I1212 20:42:57.634026  202985 logs.go:282] 0 containers: []
	W1212 20:42:57.634035  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:42:57.634041  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:42:57.634098  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:42:57.661894  202985 cri.go:89] found id: ""
	I1212 20:42:57.661916  202985 logs.go:282] 0 containers: []
	W1212 20:42:57.661924  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:42:57.661939  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:42:57.661950  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:42:57.721740  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:42:57.721813  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:42:57.787212  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:42:57.787230  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:42:57.787244  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:42:57.820153  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:42:57.820183  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:42:57.854090  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:42:57.854117  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:42:57.886662  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:42:57.886690  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:42:57.916115  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:42:57.916148  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:42:57.928721  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:42:57.928754  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:42:57.968270  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:42:57.968305  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:43:00.503991  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:43:00.515544  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:43:00.515617  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:43:00.550948  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:00.550966  202985 cri.go:89] found id: ""
	I1212 20:43:00.550974  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:43:00.551038  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:00.555606  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:43:00.555681  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:43:00.596411  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:00.596434  202985 cri.go:89] found id: ""
	I1212 20:43:00.596443  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:43:00.596506  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:00.600926  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:43:00.600995  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:43:00.642137  202985 cri.go:89] found id: ""
	I1212 20:43:00.642166  202985 logs.go:282] 0 containers: []
	W1212 20:43:00.642174  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:43:00.642180  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:43:00.642237  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:43:00.689903  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:00.689925  202985 cri.go:89] found id: ""
	I1212 20:43:00.689934  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:43:00.689986  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:00.693977  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:43:00.694046  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:43:00.735321  202985 cri.go:89] found id: ""
	I1212 20:43:00.735344  202985 logs.go:282] 0 containers: []
	W1212 20:43:00.735353  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:43:00.735359  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:43:00.735417  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:43:00.771826  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:00.771863  202985 cri.go:89] found id: ""
	I1212 20:43:00.771871  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:43:00.771933  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:00.776246  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:43:00.776314  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:43:00.816323  202985 cri.go:89] found id: ""
	I1212 20:43:00.816347  202985 logs.go:282] 0 containers: []
	W1212 20:43:00.816355  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:43:00.816361  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:43:00.816419  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:43:00.851557  202985 cri.go:89] found id: ""
	I1212 20:43:00.851582  202985 logs.go:282] 0 containers: []
	W1212 20:43:00.851591  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:43:00.851607  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:43:00.851619  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:00.894891  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:43:00.894923  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:00.941576  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:43:00.941611  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:00.986909  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:43:00.986941  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:43:01.021635  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:43:01.021672  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:43:01.051366  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:43:01.051393  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:43:01.110247  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:43:01.110281  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:43:01.123934  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:43:01.123963  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:43:01.189082  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:43:01.189102  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:43:01.189115  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:03.735149  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:43:03.745035  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:43:03.745098  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:43:03.771607  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:03.771626  202985 cri.go:89] found id: ""
	I1212 20:43:03.771634  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:43:03.771686  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:03.775749  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:43:03.775822  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:43:03.812724  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:03.812742  202985 cri.go:89] found id: ""
	I1212 20:43:03.812750  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:43:03.812811  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:03.817009  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:43:03.817076  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:43:03.856187  202985 cri.go:89] found id: ""
	I1212 20:43:03.856258  202985 logs.go:282] 0 containers: []
	W1212 20:43:03.856281  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:43:03.856298  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:43:03.856379  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:43:03.885524  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:03.885592  202985 cri.go:89] found id: ""
	I1212 20:43:03.885614  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:43:03.885706  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:03.892381  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:43:03.892497  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:43:03.920677  202985 cri.go:89] found id: ""
	I1212 20:43:03.920747  202985 logs.go:282] 0 containers: []
	W1212 20:43:03.920779  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:43:03.920798  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:43:03.920877  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:43:03.976148  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:03.976172  202985 cri.go:89] found id: ""
	I1212 20:43:03.976186  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:43:03.976248  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:03.981495  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:43:03.981563  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:43:04.024468  202985 cri.go:89] found id: ""
	I1212 20:43:04.024489  202985 logs.go:282] 0 containers: []
	W1212 20:43:04.024497  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:43:04.024503  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:43:04.024563  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:43:04.060075  202985 cri.go:89] found id: ""
	I1212 20:43:04.060098  202985 logs.go:282] 0 containers: []
	W1212 20:43:04.060106  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:43:04.060120  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:43:04.060130  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:43:04.142700  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:43:04.142741  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:43:04.246914  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:43:04.246935  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:43:04.246947  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:43:04.292157  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:43:04.292184  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:43:04.319281  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:43:04.319310  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:04.374092  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:43:04.374124  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:04.442084  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:43:04.442121  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:04.486536  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:43:04.486571  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:04.528368  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:43:04.528453  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:43:07.061254  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:43:07.071089  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:43:07.071161  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:43:07.096190  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:07.096209  202985 cri.go:89] found id: ""
	I1212 20:43:07.096217  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:43:07.096273  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:07.099882  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:43:07.099964  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:43:07.126163  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:07.126186  202985 cri.go:89] found id: ""
	I1212 20:43:07.126193  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:43:07.126249  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:07.129745  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:43:07.129816  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:43:07.159533  202985 cri.go:89] found id: ""
	I1212 20:43:07.159558  202985 logs.go:282] 0 containers: []
	W1212 20:43:07.159566  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:43:07.159572  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:43:07.159632  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:43:07.191569  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:07.191592  202985 cri.go:89] found id: ""
	I1212 20:43:07.191600  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:43:07.191655  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:07.195736  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:43:07.195804  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:43:07.286048  202985 cri.go:89] found id: ""
	I1212 20:43:07.286072  202985 logs.go:282] 0 containers: []
	W1212 20:43:07.286081  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:43:07.286087  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:43:07.286154  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:43:07.321080  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:07.321101  202985 cri.go:89] found id: ""
	I1212 20:43:07.321109  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:43:07.321172  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:07.325474  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:43:07.325543  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:43:07.358146  202985 cri.go:89] found id: ""
	I1212 20:43:07.358170  202985 logs.go:282] 0 containers: []
	W1212 20:43:07.358178  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:43:07.358185  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:43:07.358248  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:43:07.395970  202985 cri.go:89] found id: ""
	I1212 20:43:07.395996  202985 logs.go:282] 0 containers: []
	W1212 20:43:07.396004  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:43:07.396020  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:43:07.396030  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:43:07.465250  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:43:07.465325  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:43:07.560393  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:43:07.560410  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:43:07.560425  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:07.595091  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:43:07.595167  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:43:07.632448  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:43:07.632516  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:43:07.666057  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:43:07.666082  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:43:07.682220  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:43:07.682288  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:07.730439  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:43:07.730510  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:07.787078  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:43:07.787153  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:10.341335  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:43:10.351913  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:43:10.351992  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:43:10.382102  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:10.382125  202985 cri.go:89] found id: ""
	I1212 20:43:10.382133  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:43:10.382190  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:10.386167  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:43:10.386241  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:43:10.416182  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:10.416205  202985 cri.go:89] found id: ""
	I1212 20:43:10.416213  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:43:10.416299  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:10.420439  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:43:10.420511  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:43:10.455431  202985 cri.go:89] found id: ""
	I1212 20:43:10.455457  202985 logs.go:282] 0 containers: []
	W1212 20:43:10.455467  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:43:10.455473  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:43:10.455534  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:43:10.484224  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:10.484247  202985 cri.go:89] found id: ""
	I1212 20:43:10.484255  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:43:10.484310  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:10.487911  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:43:10.487976  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:43:10.517602  202985 cri.go:89] found id: ""
	I1212 20:43:10.517627  202985 logs.go:282] 0 containers: []
	W1212 20:43:10.517635  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:43:10.517642  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:43:10.517701  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:43:10.543713  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:10.543735  202985 cri.go:89] found id: ""
	I1212 20:43:10.543743  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:43:10.543796  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:10.547354  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:43:10.547408  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:43:10.580710  202985 cri.go:89] found id: ""
	I1212 20:43:10.580735  202985 logs.go:282] 0 containers: []
	W1212 20:43:10.580743  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:43:10.580748  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:43:10.580808  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:43:10.606962  202985 cri.go:89] found id: ""
	I1212 20:43:10.606997  202985 logs.go:282] 0 containers: []
	W1212 20:43:10.607005  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:43:10.607019  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:43:10.607031  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:10.653258  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:43:10.653292  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:10.706157  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:43:10.706193  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:10.759615  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:43:10.759653  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:43:10.794725  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:43:10.794806  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:43:10.869498  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:43:10.869546  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:43:10.885455  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:43:10.885484  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:43:10.978618  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:43:10.978641  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:43:10.978653  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:11.079659  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:43:11.079702  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:43:13.612045  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:43:13.621947  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:43:13.622015  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:43:13.646810  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:13.646829  202985 cri.go:89] found id: ""
	I1212 20:43:13.646836  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:43:13.646890  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:13.650508  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:43:13.650580  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:43:13.678925  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:13.678945  202985 cri.go:89] found id: ""
	I1212 20:43:13.678954  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:43:13.679012  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:13.682516  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:43:13.682591  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:43:13.708973  202985 cri.go:89] found id: ""
	I1212 20:43:13.709072  202985 logs.go:282] 0 containers: []
	W1212 20:43:13.709081  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:43:13.709087  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:43:13.709144  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:43:13.739324  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:13.739342  202985 cri.go:89] found id: ""
	I1212 20:43:13.739350  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:43:13.739406  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:13.742934  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:43:13.743046  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:43:13.771722  202985 cri.go:89] found id: ""
	I1212 20:43:13.771744  202985 logs.go:282] 0 containers: []
	W1212 20:43:13.771753  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:43:13.771759  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:43:13.771814  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:43:13.802158  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:13.802180  202985 cri.go:89] found id: ""
	I1212 20:43:13.802188  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:43:13.802246  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:13.805878  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:43:13.805954  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:43:13.830088  202985 cri.go:89] found id: ""
	I1212 20:43:13.830112  202985 logs.go:282] 0 containers: []
	W1212 20:43:13.830121  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:43:13.830129  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:43:13.830186  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:43:13.859524  202985 cri.go:89] found id: ""
	I1212 20:43:13.859554  202985 logs.go:282] 0 containers: []
	W1212 20:43:13.859563  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:43:13.859577  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:43:13.859589  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:13.896380  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:43:13.896419  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:13.936559  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:43:13.936592  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:43:13.967327  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:43:13.967357  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:43:14.012909  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:43:14.012938  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:43:14.083944  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:43:14.083964  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:43:14.083978  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:14.124403  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:43:14.124436  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:43:14.184685  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:43:14.184720  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:43:14.197617  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:43:14.197643  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:16.735784  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:43:16.745963  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:43:16.746031  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:43:16.770891  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:16.770914  202985 cri.go:89] found id: ""
	I1212 20:43:16.770923  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:43:16.770980  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:16.774675  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:43:16.774747  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:43:16.799340  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:16.799362  202985 cri.go:89] found id: ""
	I1212 20:43:16.799371  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:43:16.799431  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:16.803185  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:43:16.803275  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:43:16.827690  202985 cri.go:89] found id: ""
	I1212 20:43:16.827713  202985 logs.go:282] 0 containers: []
	W1212 20:43:16.827721  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:43:16.827746  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:43:16.827808  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:43:16.852237  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:16.852295  202985 cri.go:89] found id: ""
	I1212 20:43:16.852315  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:43:16.852383  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:16.856019  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:43:16.856142  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:43:16.882534  202985 cri.go:89] found id: ""
	I1212 20:43:16.882571  202985 logs.go:282] 0 containers: []
	W1212 20:43:16.882581  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:43:16.882587  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:43:16.882647  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:43:16.909883  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:16.909904  202985 cri.go:89] found id: ""
	I1212 20:43:16.909912  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:43:16.909970  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:16.913590  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:43:16.913692  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:43:16.937283  202985 cri.go:89] found id: ""
	I1212 20:43:16.937349  202985 logs.go:282] 0 containers: []
	W1212 20:43:16.937362  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:43:16.937369  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:43:16.937534  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:43:16.963021  202985 cri.go:89] found id: ""
	I1212 20:43:16.963055  202985 logs.go:282] 0 containers: []
	W1212 20:43:16.963064  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:43:16.963078  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:43:16.963089  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:43:17.027485  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:43:17.027518  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:43:17.092534  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:43:17.092555  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:43:17.092568  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:17.127524  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:43:17.127555  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:17.159593  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:43:17.159623  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:43:17.189050  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:43:17.189085  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:43:17.219738  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:43:17.219770  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:43:17.232453  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:43:17.232489  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:17.267176  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:43:17.267207  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:19.802891  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:43:19.812992  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:43:19.813057  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:43:19.840463  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:19.840486  202985 cri.go:89] found id: ""
	I1212 20:43:19.840495  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:43:19.840572  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:19.844113  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:43:19.844178  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:43:19.869915  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:19.869937  202985 cri.go:89] found id: ""
	I1212 20:43:19.869945  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:43:19.869999  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:19.874660  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:43:19.874746  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:43:19.905271  202985 cri.go:89] found id: ""
	I1212 20:43:19.905297  202985 logs.go:282] 0 containers: []
	W1212 20:43:19.905330  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:43:19.905338  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:43:19.905409  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:43:19.940283  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:19.940305  202985 cri.go:89] found id: ""
	I1212 20:43:19.940313  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:43:19.940369  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:19.944379  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:43:19.944448  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:43:19.970052  202985 cri.go:89] found id: ""
	I1212 20:43:19.970078  202985 logs.go:282] 0 containers: []
	W1212 20:43:19.970087  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:43:19.970093  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:43:19.970149  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:43:19.998920  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:19.998942  202985 cri.go:89] found id: ""
	I1212 20:43:19.998950  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:43:19.999004  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:20.007536  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:43:20.007619  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:43:20.037537  202985 cri.go:89] found id: ""
	I1212 20:43:20.037607  202985 logs.go:282] 0 containers: []
	W1212 20:43:20.037628  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:43:20.037640  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:43:20.037722  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:43:20.064319  202985 cri.go:89] found id: ""
	I1212 20:43:20.064344  202985 logs.go:282] 0 containers: []
	W1212 20:43:20.064353  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:43:20.064366  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:43:20.064379  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:20.100518  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:43:20.100551  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:20.132923  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:43:20.132953  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:43:20.165898  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:43:20.165929  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:43:20.224152  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:43:20.224187  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:43:20.236367  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:43:20.236394  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:43:20.304515  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:43:20.304536  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:43:20.304550  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:20.337508  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:43:20.337539  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:20.370938  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:43:20.370970  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:43:22.901030  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:43:22.910757  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:43:22.910827  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:43:22.935957  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:22.935980  202985 cri.go:89] found id: ""
	I1212 20:43:22.935989  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:43:22.936046  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:22.940277  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:43:22.940348  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:43:22.979432  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:22.979451  202985 cri.go:89] found id: ""
	I1212 20:43:22.979459  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:43:22.979511  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:22.983646  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:43:22.983710  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:43:23.013650  202985 cri.go:89] found id: ""
	I1212 20:43:23.013672  202985 logs.go:282] 0 containers: []
	W1212 20:43:23.013680  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:43:23.013686  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:43:23.013744  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:43:23.039201  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:23.039223  202985 cri.go:89] found id: ""
	I1212 20:43:23.039231  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:43:23.039287  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:23.042790  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:43:23.042859  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:43:23.067698  202985 cri.go:89] found id: ""
	I1212 20:43:23.067719  202985 logs.go:282] 0 containers: []
	W1212 20:43:23.067727  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:43:23.067733  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:43:23.067826  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:43:23.093511  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:23.093571  202985 cri.go:89] found id: ""
	I1212 20:43:23.093593  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:43:23.093654  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:23.097133  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:43:23.097198  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:43:23.121560  202985 cri.go:89] found id: ""
	I1212 20:43:23.121584  202985 logs.go:282] 0 containers: []
	W1212 20:43:23.121593  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:43:23.121599  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:43:23.121661  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:43:23.145583  202985 cri.go:89] found id: ""
	I1212 20:43:23.145605  202985 logs.go:282] 0 containers: []
	W1212 20:43:23.145612  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:43:23.145626  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:43:23.145638  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:43:23.207190  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:43:23.207262  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:43:23.207289  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:23.242667  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:43:23.242699  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:23.276996  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:43:23.277025  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:43:23.305094  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:43:23.305127  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:43:23.333079  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:43:23.333105  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:43:23.391391  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:43:23.391423  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:43:23.404134  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:43:23.404164  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:23.442982  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:43:23.443011  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:25.975973  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:43:25.988196  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:43:25.988294  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:43:26.020646  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:26.020669  202985 cri.go:89] found id: ""
	I1212 20:43:26.020678  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:43:26.020756  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:26.024589  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:43:26.024675  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:43:26.051649  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:26.051674  202985 cri.go:89] found id: ""
	I1212 20:43:26.051683  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:43:26.051749  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:26.055809  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:43:26.055939  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:43:26.081644  202985 cri.go:89] found id: ""
	I1212 20:43:26.081669  202985 logs.go:282] 0 containers: []
	W1212 20:43:26.081677  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:43:26.081683  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:43:26.081744  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:43:26.108675  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:26.108699  202985 cri.go:89] found id: ""
	I1212 20:43:26.108708  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:43:26.108771  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:26.112757  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:43:26.112857  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:43:26.142911  202985 cri.go:89] found id: ""
	I1212 20:43:26.142944  202985 logs.go:282] 0 containers: []
	W1212 20:43:26.142953  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:43:26.142959  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:43:26.143026  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:43:26.168456  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:26.168477  202985 cri.go:89] found id: ""
	I1212 20:43:26.168485  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:43:26.168558  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:26.172248  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:43:26.172359  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:43:26.196717  202985 cri.go:89] found id: ""
	I1212 20:43:26.196743  202985 logs.go:282] 0 containers: []
	W1212 20:43:26.196752  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:43:26.196758  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:43:26.196823  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:43:26.221671  202985 cri.go:89] found id: ""
	I1212 20:43:26.221693  202985 logs.go:282] 0 containers: []
	W1212 20:43:26.221702  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:43:26.221718  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:43:26.221731  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:26.261518  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:43:26.261549  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:26.295719  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:43:26.295752  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:43:26.325146  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:43:26.325183  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:43:26.364543  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:43:26.364574  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:43:26.433122  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:43:26.433160  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:43:26.447448  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:43:26.447472  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:43:26.527467  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:43:26.527484  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:43:26.527496  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:26.564945  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:43:26.565005  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:29.131974  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:43:29.143082  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:43:29.143156  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:43:29.169283  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:29.169311  202985 cri.go:89] found id: ""
	I1212 20:43:29.169321  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:43:29.169375  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:29.173122  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:43:29.173201  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:43:29.204315  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:29.204337  202985 cri.go:89] found id: ""
	I1212 20:43:29.204345  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:43:29.204400  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:29.207884  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:43:29.207961  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:43:29.232258  202985 cri.go:89] found id: ""
	I1212 20:43:29.232281  202985 logs.go:282] 0 containers: []
	W1212 20:43:29.232289  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:43:29.232295  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:43:29.232351  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:43:29.257699  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:29.257723  202985 cri.go:89] found id: ""
	I1212 20:43:29.257731  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:43:29.257788  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:29.261450  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:43:29.261529  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:43:29.287064  202985 cri.go:89] found id: ""
	I1212 20:43:29.287089  202985 logs.go:282] 0 containers: []
	W1212 20:43:29.287098  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:43:29.287104  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:43:29.287161  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:43:29.313238  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:29.313259  202985 cri.go:89] found id: ""
	I1212 20:43:29.313282  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:43:29.313338  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:29.317042  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:43:29.317141  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:43:29.341488  202985 cri.go:89] found id: ""
	I1212 20:43:29.341510  202985 logs.go:282] 0 containers: []
	W1212 20:43:29.341519  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:43:29.341524  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:43:29.341582  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:43:29.368861  202985 cri.go:89] found id: ""
	I1212 20:43:29.368942  202985 logs.go:282] 0 containers: []
	W1212 20:43:29.368954  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:43:29.368971  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:43:29.368984  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:43:29.433152  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:43:29.433221  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:43:29.433241  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:29.483393  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:43:29.483426  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:29.518834  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:43:29.518863  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:29.552845  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:43:29.552878  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:43:29.582418  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:43:29.582452  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:43:29.642283  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:43:29.642321  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:43:29.655018  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:43:29.655046  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:29.688092  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:43:29.688122  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:43:32.226125  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:43:32.238808  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:43:32.239114  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:43:32.274516  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:32.274535  202985 cri.go:89] found id: ""
	I1212 20:43:32.274543  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:43:32.274613  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:32.278964  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:43:32.279038  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:43:32.313185  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:32.313202  202985 cri.go:89] found id: ""
	I1212 20:43:32.313210  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:43:32.313265  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:32.316747  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:43:32.316811  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:43:32.341926  202985 cri.go:89] found id: ""
	I1212 20:43:32.341951  202985 logs.go:282] 0 containers: []
	W1212 20:43:32.341969  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:43:32.341975  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:43:32.342034  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:43:32.365901  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:32.365920  202985 cri.go:89] found id: ""
	I1212 20:43:32.365928  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:43:32.365986  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:32.369637  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:43:32.369728  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:43:32.394644  202985 cri.go:89] found id: ""
	I1212 20:43:32.394669  202985 logs.go:282] 0 containers: []
	W1212 20:43:32.394678  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:43:32.394684  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:43:32.394769  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:43:32.420604  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:32.420627  202985 cri.go:89] found id: ""
	I1212 20:43:32.420635  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:43:32.420713  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:32.424445  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:43:32.424542  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:43:32.454555  202985 cri.go:89] found id: ""
	I1212 20:43:32.454581  202985 logs.go:282] 0 containers: []
	W1212 20:43:32.454590  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:43:32.454596  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:43:32.454656  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:43:32.479438  202985 cri.go:89] found id: ""
	I1212 20:43:32.479460  202985 logs.go:282] 0 containers: []
	W1212 20:43:32.479468  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:43:32.479484  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:43:32.479497  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:32.532036  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:43:32.532065  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:32.578066  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:43:32.578100  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:43:32.608179  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:43:32.608211  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:43:32.670845  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
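
The cycle above is minikube's log gatherer retrying: each pass inventories CRI containers with crictl, tails the component logs it found, and probes the cluster with `kubectl describe nodes`, which fails because nothing is accepting connections on localhost:8443. The probe can be reproduced by hand inside the node (a minimal sketch, assuming SSH access to the profile's node, e.g. via `minikube ssh -p <profile>`; the binary and kubeconfig paths are copied verbatim from the log):

    # Run inside the minikube node; paths taken from the log above.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    # While the apiserver is unreachable this exits with status 1 and prints:
    #   The connection to the server localhost:8443 was refused - did you specify the right host or port?
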
	I1212 20:43:32.670976  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:43:32.670997  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:32.709263  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:43:32.709342  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:32.752369  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:43:32.752442  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:43:32.785025  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:43:32.785063  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:43:32.845682  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:43:32.845715  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:43:35.359960  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:43:35.371040  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:43:35.371104  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:43:35.398920  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:35.398938  202985 cri.go:89] found id: ""
	I1212 20:43:35.398946  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:43:35.398999  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:35.403054  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:43:35.403119  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:43:35.438384  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:35.438402  202985 cri.go:89] found id: ""
	I1212 20:43:35.438410  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:43:35.438463  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:35.442519  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:43:35.442586  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:43:35.470629  202985 cri.go:89] found id: ""
	I1212 20:43:35.470688  202985 logs.go:282] 0 containers: []
	W1212 20:43:35.470719  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:43:35.470740  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:43:35.470848  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:43:35.499104  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:35.499172  202985 cri.go:89] found id: ""
	I1212 20:43:35.499194  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:43:35.499279  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:35.503312  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:43:35.503430  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:43:35.530711  202985 cri.go:89] found id: ""
	I1212 20:43:35.530786  202985 logs.go:282] 0 containers: []
	W1212 20:43:35.530810  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:43:35.530835  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:43:35.530967  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:43:35.562329  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:35.562403  202985 cri.go:89] found id: ""
	I1212 20:43:35.562425  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:43:35.562511  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:35.566825  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:43:35.567012  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:43:35.600761  202985 cri.go:89] found id: ""
	I1212 20:43:35.600782  202985 logs.go:282] 0 containers: []
	W1212 20:43:35.600790  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:43:35.600796  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:43:35.600853  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:43:35.632805  202985 cri.go:89] found id: ""
	I1212 20:43:35.632825  202985 logs.go:282] 0 containers: []
	W1212 20:43:35.632834  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:43:35.632849  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:43:35.632865  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:35.679216  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:43:35.679300  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:35.767289  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:43:35.767385  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:43:35.830036  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:43:35.830104  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:43:35.845595  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:43:35.845666  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:35.890181  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:43:35.890252  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:35.922126  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:43:35.922160  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:43:35.955055  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:43:35.955088  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:43:36.015026  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:43:36.015065  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:43:36.088388  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
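
Each inventory pass finds exactly one container for kube-apiserver, etcd, kube-scheduler, and kube-controller-manager, and none for coredns, kube-proxy, kindnet, or storage-provisioner. That split is consistent with static control-plane pods that the kubelet did create, while the add-on pods, which are created through the API, never appeared because the apiserver is not accepting connections. The per-component check is a plain crictl query (sketch; the IDs shown are the ones this log reports):

    sudo crictl ps -a --quiet --name=kube-apiserver   # prints 564c54821d75... here
    sudo crictl ps -a --quiet --name=coredns          # prints nothing: the pod was never created
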
	I1212 20:43:38.589090  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:43:38.599816  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:43:38.599918  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:43:38.637248  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:38.637282  202985 cri.go:89] found id: ""
	I1212 20:43:38.637290  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:43:38.637346  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:38.641311  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:43:38.641379  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:43:38.685997  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:38.686016  202985 cri.go:89] found id: ""
	I1212 20:43:38.686024  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:43:38.686081  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:38.690417  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:43:38.690485  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:43:38.733674  202985 cri.go:89] found id: ""
	I1212 20:43:38.733699  202985 logs.go:282] 0 containers: []
	W1212 20:43:38.733708  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:43:38.733716  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:43:38.733774  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:43:38.778571  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:38.778589  202985 cri.go:89] found id: ""
	I1212 20:43:38.778597  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:43:38.778655  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:38.783107  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:43:38.783173  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:43:38.838052  202985 cri.go:89] found id: ""
	I1212 20:43:38.838072  202985 logs.go:282] 0 containers: []
	W1212 20:43:38.838080  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:43:38.838086  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:43:38.838146  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:43:38.875749  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:38.875782  202985 cri.go:89] found id: ""
	I1212 20:43:38.875790  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:43:38.875975  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:38.882493  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:43:38.882573  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:43:38.920436  202985 cri.go:89] found id: ""
	I1212 20:43:38.920460  202985 logs.go:282] 0 containers: []
	W1212 20:43:38.920469  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:43:38.920475  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:43:38.920540  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:43:38.970128  202985 cri.go:89] found id: ""
	I1212 20:43:38.970150  202985 logs.go:282] 0 containers: []
	W1212 20:43:38.970211  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:43:38.970228  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:43:38.970268  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:39.021067  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:43:39.021146  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:43:39.068188  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:43:39.068221  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:43:39.087821  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:43:39.087886  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:39.127925  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:43:39.127997  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:39.174949  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:43:39.175034  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:39.216820  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:43:39.216887  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:43:39.248961  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:43:39.249035  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:43:39.313283  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:43:39.313360  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:43:39.407186  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:43:41.907470  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:43:41.917514  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:43:41.917583  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:43:41.942553  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:41.942572  202985 cri.go:89] found id: ""
	I1212 20:43:41.942581  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:43:41.942635  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:41.946037  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:43:41.946111  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:43:41.971802  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:41.971821  202985 cri.go:89] found id: ""
	I1212 20:43:41.971830  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:43:41.971925  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:41.975472  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:43:41.975538  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:43:41.999082  202985 cri.go:89] found id: ""
	I1212 20:43:41.999103  202985 logs.go:282] 0 containers: []
	W1212 20:43:41.999111  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:43:41.999117  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:43:41.999181  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:43:42.053199  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:42.053219  202985 cri.go:89] found id: ""
	I1212 20:43:42.053228  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:43:42.053284  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:42.057183  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:43:42.057263  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:43:42.105231  202985 cri.go:89] found id: ""
	I1212 20:43:42.105256  202985 logs.go:282] 0 containers: []
	W1212 20:43:42.105267  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:43:42.105274  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:43:42.105345  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:43:42.148486  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:42.148510  202985 cri.go:89] found id: ""
	I1212 20:43:42.148520  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:43:42.148601  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:42.154078  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:43:42.154240  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:43:42.243725  202985 cri.go:89] found id: ""
	I1212 20:43:42.243756  202985 logs.go:282] 0 containers: []
	W1212 20:43:42.243768  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:43:42.243776  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:43:42.243865  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:43:42.348691  202985 cri.go:89] found id: ""
	I1212 20:43:42.348713  202985 logs.go:282] 0 containers: []
	W1212 20:43:42.348721  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:43:42.348736  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:43:42.348749  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:43:42.366830  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:43:42.366858  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:43:42.462838  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:43:42.462863  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:43:42.462876  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:42.526714  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:43:42.526746  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:42.591728  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:43:42.591763  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:43:42.624694  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:43:42.624728  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:43:42.691581  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:43:42.691617  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:42.734966  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:43:42.734998  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:42.772477  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:43:42.772518  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:43:45.323305  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:43:45.333076  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:43:45.333141  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:43:45.363483  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:45.363503  202985 cri.go:89] found id: ""
	I1212 20:43:45.363512  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:43:45.363573  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:45.367646  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:43:45.367732  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:43:45.393093  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:45.393113  202985 cri.go:89] found id: ""
	I1212 20:43:45.393121  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:43:45.393178  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:45.396719  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:43:45.396790  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:43:45.423403  202985 cri.go:89] found id: ""
	I1212 20:43:45.423424  202985 logs.go:282] 0 containers: []
	W1212 20:43:45.423433  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:43:45.423441  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:43:45.423508  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:43:45.454137  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:45.454155  202985 cri.go:89] found id: ""
	I1212 20:43:45.454164  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:43:45.454219  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:45.458821  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:43:45.458945  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:43:45.489954  202985 cri.go:89] found id: ""
	I1212 20:43:45.489976  202985 logs.go:282] 0 containers: []
	W1212 20:43:45.489984  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:43:45.489990  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:43:45.490049  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:43:45.517918  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:45.517983  202985 cri.go:89] found id: ""
	I1212 20:43:45.518003  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:43:45.518083  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:45.521833  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:43:45.521915  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:43:45.546294  202985 cri.go:89] found id: ""
	I1212 20:43:45.546319  202985 logs.go:282] 0 containers: []
	W1212 20:43:45.546327  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:43:45.546343  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:43:45.546415  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:43:45.571525  202985 cri.go:89] found id: ""
	I1212 20:43:45.571604  202985 logs.go:282] 0 containers: []
	W1212 20:43:45.571626  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:43:45.571666  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:43:45.571694  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:45.605152  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:43:45.605187  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:43:45.633804  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:43:45.633831  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:43:45.695991  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:43:45.696059  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:43:45.696088  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:45.738576  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:43:45.738607  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:45.772544  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:43:45.772571  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:45.807818  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:43:45.807861  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:43:45.835138  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:43:45.835169  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:43:45.899672  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:43:45.899710  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:43:48.414362  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:43:48.424419  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:43:48.424487  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:43:48.458227  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:48.458246  202985 cri.go:89] found id: ""
	I1212 20:43:48.458254  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:43:48.458306  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:48.463723  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:43:48.463819  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:43:48.494898  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:48.494917  202985 cri.go:89] found id: ""
	I1212 20:43:48.494925  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:43:48.494981  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:48.499735  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:43:48.499802  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:43:48.527625  202985 cri.go:89] found id: ""
	I1212 20:43:48.527705  202985 logs.go:282] 0 containers: []
	W1212 20:43:48.527726  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:43:48.527744  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:43:48.527892  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:43:48.552036  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:48.552059  202985 cri.go:89] found id: ""
	I1212 20:43:48.552067  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:43:48.552122  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:48.555459  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:43:48.555534  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:43:48.580827  202985 cri.go:89] found id: ""
	I1212 20:43:48.580848  202985 logs.go:282] 0 containers: []
	W1212 20:43:48.580857  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:43:48.580863  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:43:48.580918  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:43:48.605436  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:48.605459  202985 cri.go:89] found id: ""
	I1212 20:43:48.605468  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:43:48.605544  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:48.609180  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:43:48.609251  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:43:48.633034  202985 cri.go:89] found id: ""
	I1212 20:43:48.633057  202985 logs.go:282] 0 containers: []
	W1212 20:43:48.633066  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:43:48.633072  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:43:48.633129  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:43:48.657773  202985 cri.go:89] found id: ""
	I1212 20:43:48.657854  202985 logs.go:282] 0 containers: []
	W1212 20:43:48.657870  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:43:48.657885  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:43:48.657898  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:48.692997  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:43:48.693024  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:43:48.728686  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:43:48.728710  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:43:48.784876  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:43:48.784906  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:48.821003  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:43:48.821036  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:48.855073  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:43:48.855102  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:43:48.884558  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:43:48.884591  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:43:48.896948  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:43:48.896977  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:43:48.958373  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:43:48.958392  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:43:48.958405  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:51.495979  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:43:51.505694  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:43:51.505771  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:43:51.532485  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:51.532506  202985 cri.go:89] found id: ""
	I1212 20:43:51.532520  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:43:51.532576  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:51.536251  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:43:51.536320  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:43:51.562431  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:51.562451  202985 cri.go:89] found id: ""
	I1212 20:43:51.562458  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:43:51.562517  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:51.566095  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:43:51.566160  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:43:51.590724  202985 cri.go:89] found id: ""
	I1212 20:43:51.590748  202985 logs.go:282] 0 containers: []
	W1212 20:43:51.590757  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:43:51.590763  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:43:51.590820  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:43:51.617614  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:51.617635  202985 cri.go:89] found id: ""
	I1212 20:43:51.617644  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:43:51.617699  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:51.621310  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:43:51.621398  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:43:51.645742  202985 cri.go:89] found id: ""
	I1212 20:43:51.645765  202985 logs.go:282] 0 containers: []
	W1212 20:43:51.645774  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:43:51.645781  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:43:51.645842  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:43:51.670719  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:51.670740  202985 cri.go:89] found id: ""
	I1212 20:43:51.670748  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:43:51.670806  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:51.675054  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:43:51.675135  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:43:51.700027  202985 cri.go:89] found id: ""
	I1212 20:43:51.700048  202985 logs.go:282] 0 containers: []
	W1212 20:43:51.700056  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:43:51.700062  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:43:51.700125  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:43:51.729479  202985 cri.go:89] found id: ""
	I1212 20:43:51.729500  202985 logs.go:282] 0 containers: []
	W1212 20:43:51.729509  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:43:51.729523  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:43:51.729535  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:43:51.786113  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:43:51.786147  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:51.820600  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:43:51.820631  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:51.865929  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:43:51.865964  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:43:51.896448  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:43:51.896478  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:43:51.909366  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:43:51.909394  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:43:51.973814  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:43:51.973832  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:43:51.973856  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:52.009571  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:43:52.009619  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:52.046788  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:43:52.046822  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:43:54.577748  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:43:54.588645  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:43:54.588717  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:43:54.614445  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:54.614469  202985 cri.go:89] found id: ""
	I1212 20:43:54.614477  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:43:54.614533  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:54.618326  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:43:54.618403  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:43:54.646209  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:54.646232  202985 cri.go:89] found id: ""
	I1212 20:43:54.646240  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:43:54.646295  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:54.649846  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:43:54.649912  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:43:54.674793  202985 cri.go:89] found id: ""
	I1212 20:43:54.674815  202985 logs.go:282] 0 containers: []
	W1212 20:43:54.674824  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:43:54.674830  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:43:54.674892  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:43:54.701644  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:54.701667  202985 cri.go:89] found id: ""
	I1212 20:43:54.701677  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:43:54.701740  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:54.705420  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:43:54.705493  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:43:54.733508  202985 cri.go:89] found id: ""
	I1212 20:43:54.733529  202985 logs.go:282] 0 containers: []
	W1212 20:43:54.733537  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:43:54.733543  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:43:54.733598  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:43:54.762822  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:54.762846  202985 cri.go:89] found id: ""
	I1212 20:43:54.762854  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:43:54.762916  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:54.766693  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:43:54.766793  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:43:54.796197  202985 cri.go:89] found id: ""
	I1212 20:43:54.796221  202985 logs.go:282] 0 containers: []
	W1212 20:43:54.796230  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:43:54.796236  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:43:54.796335  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:43:54.826188  202985 cri.go:89] found id: ""
	I1212 20:43:54.826210  202985 logs.go:282] 0 containers: []
	W1212 20:43:54.826218  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:43:54.826232  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:43:54.826251  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:54.862474  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:43:54.862506  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:54.901793  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:43:54.901828  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:43:54.929759  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:43:54.929792  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:43:54.958003  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:43:54.958031  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:43:55.019163  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:43:55.019248  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:43:55.032796  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:43:55.032832  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:43:55.097275  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
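
The `pgrep -xnf kube-apiserver.*minikube.*` probes that open each pass land roughly every three seconds (20:43:35, :38, :41, :45, :48, :51, :54, :57): between gather passes the tooling re-checks whether a kube-apiserver process is running. The same check by hand (sketch):

    # pgrep exits 0 if a matching process exists, 1 if none does.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
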
	I1212 20:43:55.097297  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:43:55.097311  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:55.133269  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:43:55.133304  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:57.667629  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:43:57.677496  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:43:57.677564  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:43:57.700990  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:57.701019  202985 cri.go:89] found id: ""
	I1212 20:43:57.701027  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:43:57.701081  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:57.704499  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:43:57.704563  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:43:57.736215  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:57.736234  202985 cri.go:89] found id: ""
	I1212 20:43:57.736242  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:43:57.736297  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:57.740101  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:43:57.740174  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:43:57.769726  202985 cri.go:89] found id: ""
	I1212 20:43:57.769747  202985 logs.go:282] 0 containers: []
	W1212 20:43:57.769756  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:43:57.769762  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:43:57.769831  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:43:57.794820  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:57.794902  202985 cri.go:89] found id: ""
	I1212 20:43:57.794926  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:43:57.795005  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:57.798650  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:43:57.798717  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:43:57.823189  202985 cri.go:89] found id: ""
	I1212 20:43:57.823211  202985 logs.go:282] 0 containers: []
	W1212 20:43:57.823220  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:43:57.823226  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:43:57.823283  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:43:57.851569  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:57.851654  202985 cri.go:89] found id: ""
	I1212 20:43:57.851690  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:43:57.851791  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:43:57.855576  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:43:57.855663  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:43:57.881228  202985 cri.go:89] found id: ""
	I1212 20:43:57.881251  202985 logs.go:282] 0 containers: []
	W1212 20:43:57.881259  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:43:57.881265  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:43:57.881327  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:43:57.905175  202985 cri.go:89] found id: ""
	I1212 20:43:57.905198  202985 logs.go:282] 0 containers: []
	W1212 20:43:57.905206  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:43:57.905218  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:43:57.905230  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:43:57.962185  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:43:57.962220  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:43:57.975676  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:43:57.975704  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:43:58.013226  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:43:58.013264  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:43:58.049123  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:43:58.049155  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:43:58.079359  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:43:58.079391  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:43:58.146658  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:43:58.146679  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:43:58.146691  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:43:58.180399  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:43:58.180429  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:43:58.228904  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:43:58.228934  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
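	The block above is one pass of minikube's log-gathering loop: while waiting for the apiserver to come back, it re-probes for a running kube-apiserver process every few seconds and, on each miss, re-collects component logs. Note that the container IDs come from crictl ps -a, which also lists exited containers, so a "found" kube-apiserver here does not mean one is serving; the repeated "connection to the server localhost:8443 was refused" from kubectl describe nodes shows nothing is listening on the apiserver port. A rough shell equivalent of the probe, assembled only from the pgrep and crictl invocations shown in the log (illustrative sketch, not minikube's actual code):

		# Sketch of the wait-and-collect loop this log repeats.
		while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
		    # -a includes exited containers; take the newest ID if several exist
		    id=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
		    [ -n "$id" ] && sudo crictl logs --tail 400 "$id"
		    sleep 3
		done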
	I1212 20:44:00.781002  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:44:00.791305  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:44:00.791375  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:44:00.818731  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:44:00.818753  202985 cri.go:89] found id: ""
	I1212 20:44:00.818762  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:44:00.818817  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:44:00.822290  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:44:00.822360  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:44:00.850204  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:44:00.850227  202985 cri.go:89] found id: ""
	I1212 20:44:00.850234  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:44:00.850290  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:44:00.854017  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:44:00.854098  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:44:00.879405  202985 cri.go:89] found id: ""
	I1212 20:44:00.879428  202985 logs.go:282] 0 containers: []
	W1212 20:44:00.879437  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:44:00.879443  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:44:00.879505  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:44:00.904395  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:44:00.904415  202985 cri.go:89] found id: ""
	I1212 20:44:00.904423  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:44:00.904479  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:44:00.908318  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:44:00.908391  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:44:00.934602  202985 cri.go:89] found id: ""
	I1212 20:44:00.934625  202985 logs.go:282] 0 containers: []
	W1212 20:44:00.934633  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:44:00.934639  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:44:00.934695  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:44:00.963982  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:44:00.964006  202985 cri.go:89] found id: ""
	I1212 20:44:00.964013  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:44:00.964075  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:44:00.967644  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:44:00.967710  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:44:00.993298  202985 cri.go:89] found id: ""
	I1212 20:44:00.993321  202985 logs.go:282] 0 containers: []
	W1212 20:44:00.993329  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:44:00.993335  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:44:00.993393  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:44:01.020650  202985 cri.go:89] found id: ""
	I1212 20:44:01.020673  202985 logs.go:282] 0 containers: []
	W1212 20:44:01.020682  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:44:01.020696  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:44:01.020708  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:44:01.084071  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:44:01.084105  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:44:01.152805  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:44:01.152829  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:44:01.152848  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:44:01.191057  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:44:01.191097  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:44:01.247475  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:44:01.247509  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:44:01.263371  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:44:01.263399  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:44:01.309074  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:44:01.309108  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:44:01.345097  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:44:01.345127  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:44:01.375162  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:44:01.375194  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:44:03.910206  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:44:03.920927  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:44:03.921000  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:44:03.963013  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:44:03.963035  202985 cri.go:89] found id: ""
	I1212 20:44:03.963043  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:44:03.963096  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:44:03.967730  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:44:03.967806  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:44:03.998532  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:44:03.998553  202985 cri.go:89] found id: ""
	I1212 20:44:03.998567  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:44:03.998624  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:44:04.002636  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:44:04.002709  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:44:04.032598  202985 cri.go:89] found id: ""
	I1212 20:44:04.032620  202985 logs.go:282] 0 containers: []
	W1212 20:44:04.032628  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:44:04.032634  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:44:04.032693  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:44:04.061954  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:44:04.061976  202985 cri.go:89] found id: ""
	I1212 20:44:04.061984  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:44:04.062040  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:44:04.066241  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:44:04.066313  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:44:04.103751  202985 cri.go:89] found id: ""
	I1212 20:44:04.103778  202985 logs.go:282] 0 containers: []
	W1212 20:44:04.103786  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:44:04.103792  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:44:04.103861  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:44:04.139829  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:44:04.139885  202985 cri.go:89] found id: ""
	I1212 20:44:04.139894  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:44:04.139967  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:44:04.145405  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:44:04.145480  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:44:04.182139  202985 cri.go:89] found id: ""
	I1212 20:44:04.182165  202985 logs.go:282] 0 containers: []
	W1212 20:44:04.182174  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:44:04.182180  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:44:04.182236  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:44:04.238599  202985 cri.go:89] found id: ""
	I1212 20:44:04.238623  202985 logs.go:282] 0 containers: []
	W1212 20:44:04.238631  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:44:04.238644  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:44:04.238655  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:44:04.306009  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:44:04.306042  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:44:04.318569  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:44:04.318596  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:44:04.384694  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:44:04.384714  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:44:04.384728  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:44:04.417605  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:44:04.417636  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:44:04.455669  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:44:04.455698  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:44:04.483448  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:44:04.483475  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:44:04.522657  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:44:04.522695  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:44:04.553728  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:44:04.553759  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:44:07.083401  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:44:07.094301  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:44:07.094402  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:44:07.120557  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:44:07.120575  202985 cri.go:89] found id: ""
	I1212 20:44:07.120582  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:44:07.120641  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:44:07.124672  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:44:07.124793  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:44:07.150450  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:44:07.150468  202985 cri.go:89] found id: ""
	I1212 20:44:07.150479  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:44:07.150531  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:44:07.154339  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:44:07.154392  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:44:07.188585  202985 cri.go:89] found id: ""
	I1212 20:44:07.188610  202985 logs.go:282] 0 containers: []
	W1212 20:44:07.188619  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:44:07.188624  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:44:07.188681  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:44:07.256832  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:44:07.256855  202985 cri.go:89] found id: ""
	I1212 20:44:07.256863  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:44:07.256934  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:44:07.268216  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:44:07.268286  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:44:07.310192  202985 cri.go:89] found id: ""
	I1212 20:44:07.310216  202985 logs.go:282] 0 containers: []
	W1212 20:44:07.310224  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:44:07.310230  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:44:07.310291  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:44:07.344355  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:44:07.344378  202985 cri.go:89] found id: ""
	I1212 20:44:07.344386  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:44:07.344441  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:44:07.348491  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:44:07.348574  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:44:07.391361  202985 cri.go:89] found id: ""
	I1212 20:44:07.391383  202985 logs.go:282] 0 containers: []
	W1212 20:44:07.391392  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:44:07.391398  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:44:07.391472  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:44:07.420853  202985 cri.go:89] found id: ""
	I1212 20:44:07.420881  202985 logs.go:282] 0 containers: []
	W1212 20:44:07.420890  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:44:07.420903  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:44:07.420915  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:44:07.466631  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:44:07.467391  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:44:07.519769  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:44:07.519805  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:44:07.552613  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:44:07.552646  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:44:07.635464  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:44:07.635489  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:44:07.635502  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:44:07.673803  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:44:07.673839  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:44:07.738227  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:44:07.738319  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:44:07.778810  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:44:07.778894  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:44:07.839296  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:44:07.839330  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:44:10.352580  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:44:10.365003  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:44:10.365081  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:44:10.400478  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:44:10.400496  202985 cri.go:89] found id: ""
	I1212 20:44:10.400504  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:44:10.400565  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:44:10.404689  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:44:10.404772  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:44:10.436506  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:44:10.436528  202985 cri.go:89] found id: ""
	I1212 20:44:10.436536  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:44:10.436601  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:44:10.440516  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:44:10.440589  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:44:10.478323  202985 cri.go:89] found id: ""
	I1212 20:44:10.478411  202985 logs.go:282] 0 containers: []
	W1212 20:44:10.478441  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:44:10.478486  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:44:10.478598  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:44:10.512967  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:44:10.513037  202985 cri.go:89] found id: ""
	I1212 20:44:10.513073  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:44:10.513167  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:44:10.519865  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:44:10.519984  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:44:10.548058  202985 cri.go:89] found id: ""
	I1212 20:44:10.548133  202985 logs.go:282] 0 containers: []
	W1212 20:44:10.548156  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:44:10.548175  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:44:10.548260  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:44:10.582272  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:44:10.582341  202985 cri.go:89] found id: ""
	I1212 20:44:10.582362  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:44:10.582449  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:44:10.587691  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:44:10.587814  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:44:10.632379  202985 cri.go:89] found id: ""
	I1212 20:44:10.632500  202985 logs.go:282] 0 containers: []
	W1212 20:44:10.632538  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:44:10.632578  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:44:10.632692  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:44:10.664847  202985 cri.go:89] found id: ""
	I1212 20:44:10.664921  202985 logs.go:282] 0 containers: []
	W1212 20:44:10.664960  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:44:10.664992  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:44:10.665017  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:44:10.740494  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:44:10.740572  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:44:10.755578  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:44:10.755649  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:44:10.850837  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:44:10.850855  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:44:10.850868  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:44:10.895578  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:44:10.895652  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:44:10.940384  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:44:10.940419  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:44:11.013991  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:44:11.014036  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:44:11.078036  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:44:11.078067  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:44:11.116861  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:44:11.116906  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:44:13.653630  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:44:13.663707  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:44:13.663773  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:44:13.692386  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:44:13.692408  202985 cri.go:89] found id: ""
	I1212 20:44:13.692415  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:44:13.692473  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:44:13.695975  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:44:13.696051  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:44:13.728289  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:44:13.728310  202985 cri.go:89] found id: ""
	I1212 20:44:13.728319  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:44:13.728375  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:44:13.731887  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:44:13.731962  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:44:13.758531  202985 cri.go:89] found id: ""
	I1212 20:44:13.758552  202985 logs.go:282] 0 containers: []
	W1212 20:44:13.758561  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:44:13.758566  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:44:13.758621  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:44:13.783266  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:44:13.783285  202985 cri.go:89] found id: ""
	I1212 20:44:13.783293  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:44:13.783345  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:44:13.786760  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:44:13.786830  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:44:13.812081  202985 cri.go:89] found id: ""
	I1212 20:44:13.812103  202985 logs.go:282] 0 containers: []
	W1212 20:44:13.812111  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:44:13.812116  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:44:13.812171  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:44:13.836550  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:44:13.836575  202985 cri.go:89] found id: ""
	I1212 20:44:13.836584  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:44:13.836641  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:44:13.840055  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:44:13.840121  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:44:13.864451  202985 cri.go:89] found id: ""
	I1212 20:44:13.864473  202985 logs.go:282] 0 containers: []
	W1212 20:44:13.864481  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:44:13.864487  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:44:13.864547  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:44:13.890181  202985 cri.go:89] found id: ""
	I1212 20:44:13.890208  202985 logs.go:282] 0 containers: []
	W1212 20:44:13.890217  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:44:13.890230  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:44:13.890241  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:44:13.925155  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:44:13.925243  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:44:13.956252  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:44:13.956325  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:44:14.036389  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:44:14.036424  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:44:14.055165  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:44:14.055234  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:44:14.140609  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:44:14.140669  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:44:14.140697  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:44:14.182930  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:44:14.183000  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:44:14.249018  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:44:14.251046  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:44:14.310044  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:44:14.310089  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:44:16.857599  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:44:16.868043  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:44:16.868122  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:44:16.893323  202985 cri.go:89] found id: "564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:44:16.893344  202985 cri.go:89] found id: ""
	I1212 20:44:16.893352  202985 logs.go:282] 1 containers: [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7]
	I1212 20:44:16.893420  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:44:16.896953  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:44:16.897022  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:44:16.924766  202985 cri.go:89] found id: "6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:44:16.924785  202985 cri.go:89] found id: ""
	I1212 20:44:16.924793  202985 logs.go:282] 1 containers: [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748]
	I1212 20:44:16.924849  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:44:16.928343  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:44:16.928413  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:44:16.966680  202985 cri.go:89] found id: ""
	I1212 20:44:16.966703  202985 logs.go:282] 0 containers: []
	W1212 20:44:16.966711  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:44:16.966717  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:44:16.966772  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:44:16.993631  202985 cri.go:89] found id: "78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:44:16.993653  202985 cri.go:89] found id: ""
	I1212 20:44:16.993661  202985 logs.go:282] 1 containers: [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d]
	I1212 20:44:16.993714  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:44:16.997630  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:44:16.997700  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:44:17.028661  202985 cri.go:89] found id: ""
	I1212 20:44:17.028691  202985 logs.go:282] 0 containers: []
	W1212 20:44:17.028699  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:44:17.028705  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:44:17.028765  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:44:17.053643  202985 cri.go:89] found id: "a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:44:17.053661  202985 cri.go:89] found id: ""
	I1212 20:44:17.053669  202985 logs.go:282] 1 containers: [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f]
	I1212 20:44:17.053729  202985 ssh_runner.go:195] Run: which crictl
	I1212 20:44:17.057506  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:44:17.057584  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:44:17.083093  202985 cri.go:89] found id: ""
	I1212 20:44:17.083119  202985 logs.go:282] 0 containers: []
	W1212 20:44:17.083128  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:44:17.083134  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:44:17.083190  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:44:17.108951  202985 cri.go:89] found id: ""
	I1212 20:44:17.108974  202985 logs.go:282] 0 containers: []
	W1212 20:44:17.108982  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:44:17.109001  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:44:17.109013  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:44:17.121091  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:44:17.121120  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:44:17.185227  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:44:17.185247  202985 logs.go:123] Gathering logs for kube-scheduler [78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d] ...
	I1212 20:44:17.185267  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d"
	I1212 20:44:17.220906  202985 logs.go:123] Gathering logs for kube-controller-manager [a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f] ...
	I1212 20:44:17.220938  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f"
	I1212 20:44:17.252901  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:44:17.252934  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:44:17.281825  202985 logs.go:123] Gathering logs for kube-apiserver [564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7] ...
	I1212 20:44:17.281866  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7"
	I1212 20:44:17.329179  202985 logs.go:123] Gathering logs for etcd [6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748] ...
	I1212 20:44:17.329215  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748"
	I1212 20:44:17.366673  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:44:17.366704  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:44:17.406993  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:44:17.407018  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:44:19.965304  202985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:44:19.976188  202985 kubeadm.go:602] duration metric: took 4m3.948169683s to restartPrimaryControlPlane
	W1212 20:44:19.976254  202985 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 20:44:19.976328  202985 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1212 20:44:20.471579  202985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:44:20.484860  202985 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:44:20.492807  202985 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:44:20.492868  202985 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:44:20.500421  202985 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:44:20.500441  202985 kubeadm.go:158] found existing configuration files:
	
	I1212 20:44:20.500490  202985 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 20:44:20.508270  202985 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:44:20.508337  202985 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:44:20.515665  202985 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 20:44:20.523745  202985 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:44:20.523808  202985 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:44:20.531743  202985 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 20:44:20.541300  202985 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:44:20.541371  202985 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:44:20.549003  202985 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 20:44:20.556690  202985 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:44:20.556752  202985 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
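	The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and removed otherwise. Here every grep exits with status 2 because the files no longer exist (the kubeadm reset just above deleted them), so each rm -f is a no-op. The per-file sequence condenses to roughly this sketch (same commands as the log, folded into a loop):

		# Illustrative condensation of the cleanup performed file by file above.
		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		    sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f ||
		        sudo rm -f /etc/kubernetes/$f   # missing or stale: remove before kubeadm init
		done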
	I1212 20:44:20.564144  202985 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:44:20.601873  202985 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 20:44:20.602141  202985 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:44:20.673932  202985 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:44:20.674080  202985 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 20:44:20.674175  202985 kubeadm.go:319] OS: Linux
	I1212 20:44:20.674253  202985 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:44:20.674339  202985 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:44:20.674423  202985 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:44:20.674507  202985 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:44:20.674590  202985 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:44:20.674672  202985 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:44:20.674750  202985 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:44:20.674830  202985 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:44:20.674912  202985 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:44:20.742851  202985 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:44:20.742965  202985 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:44:20.743062  202985 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:44:30.338757  202985 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:44:30.341674  202985 out.go:252]   - Generating certificates and keys ...
	I1212 20:44:30.341766  202985 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:44:30.341838  202985 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:44:30.341916  202985 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 20:44:30.341982  202985 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 20:44:30.342055  202985 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 20:44:30.342113  202985 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 20:44:30.342181  202985 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 20:44:30.342624  202985 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 20:44:30.343007  202985 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 20:44:30.343344  202985 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 20:44:30.343565  202985 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 20:44:30.343626  202985 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:44:30.478384  202985 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:44:30.556233  202985 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:44:30.984846  202985 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:44:31.067652  202985 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:44:31.573421  202985 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:44:31.574436  202985 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:44:31.577441  202985 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:44:31.580631  202985 out.go:252]   - Booting up control plane ...
	I1212 20:44:31.580731  202985 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:44:31.580832  202985 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:44:31.584090  202985 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:44:31.617825  202985 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:44:31.617953  202985 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:44:31.627405  202985 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:44:31.627520  202985 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:44:31.627564  202985 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:44:31.810112  202985 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:44:31.810236  202985 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:48:31.811069  202985 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001044243s
	I1212 20:48:31.811104  202985 kubeadm.go:319] 
	I1212 20:48:31.811162  202985 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:48:31.811216  202985 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:48:31.811340  202985 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:48:31.811351  202985 kubeadm.go:319] 
	I1212 20:48:31.811472  202985 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:48:31.811515  202985 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:48:31.811553  202985 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:48:31.811563  202985 kubeadm.go:319] 
	I1212 20:48:31.815458  202985 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 20:48:31.815908  202985 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:48:31.816175  202985 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:48:31.816473  202985 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 20:48:31.816489  202985 kubeadm.go:319] 
	I1212 20:48:31.816573  202985 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1212 20:48:31.816691  202985 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001044243s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 20:48:31.816780  202985 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1212 20:48:32.228921  202985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:48:32.241808  202985 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:48:32.241874  202985 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:48:32.249640  202985 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:48:32.249661  202985 kubeadm.go:158] found existing configuration files:
	
	I1212 20:48:32.249713  202985 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 20:48:32.257630  202985 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:48:32.257693  202985 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:48:32.264828  202985 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 20:48:32.272435  202985 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:48:32.272503  202985 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:48:32.279591  202985 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 20:48:32.287045  202985 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:48:32.287108  202985 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:48:32.294497  202985 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 20:48:32.302272  202985 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:48:32.302335  202985 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:48:32.310054  202985 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:48:32.348154  202985 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 20:48:32.348287  202985 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:48:32.417613  202985 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:48:32.417733  202985 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 20:48:32.417803  202985 kubeadm.go:319] OS: Linux
	I1212 20:48:32.417873  202985 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:48:32.417945  202985 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:48:32.418019  202985 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:48:32.418093  202985 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:48:32.418169  202985 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:48:32.418242  202985 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:48:32.418313  202985 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:48:32.418388  202985 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:48:32.418475  202985 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:48:32.482859  202985 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:48:32.482975  202985 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:48:32.483082  202985 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:48:32.492285  202985 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:48:32.497962  202985 out.go:252]   - Generating certificates and keys ...
	I1212 20:48:32.498074  202985 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:48:32.498152  202985 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:48:32.498244  202985 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 20:48:32.498313  202985 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 20:48:32.498393  202985 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 20:48:32.498455  202985 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 20:48:32.498527  202985 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 20:48:32.498598  202985 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 20:48:32.498682  202985 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 20:48:32.498764  202985 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 20:48:32.498808  202985 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 20:48:32.498873  202985 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:48:32.661788  202985 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:48:33.028388  202985 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:48:33.763878  202985 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:48:33.989332  202985 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:48:34.356659  202985 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:48:34.357298  202985 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:48:34.360548  202985 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:48:34.363789  202985 out.go:252]   - Booting up control plane ...
	I1212 20:48:34.363905  202985 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:48:34.363993  202985 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:48:34.364753  202985 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:48:34.384629  202985 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:48:34.384747  202985 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:48:34.392772  202985 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:48:34.393106  202985 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:48:34.393151  202985 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:48:34.523505  202985 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:48:34.523716  202985 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:52:34.524939  202985 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001337229s
	I1212 20:52:34.524975  202985 kubeadm.go:319] 
	I1212 20:52:34.525029  202985 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:52:34.525065  202985 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:52:34.525168  202985 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:52:34.525178  202985 kubeadm.go:319] 
	I1212 20:52:34.525277  202985 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:52:34.525311  202985 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:52:34.525344  202985 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:52:34.525358  202985 kubeadm.go:319] 
	I1212 20:52:34.529517  202985 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 20:52:34.529922  202985 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:52:34.530028  202985 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:52:34.530281  202985 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 20:52:34.530290  202985 kubeadm.go:319] 
	I1212 20:52:34.530356  202985 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 20:52:34.530412  202985 kubeadm.go:403] duration metric: took 12m18.580331222s to StartCluster
	I1212 20:52:34.530448  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:52:34.530509  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:52:34.578040  202985 cri.go:89] found id: ""
	I1212 20:52:34.578062  202985 logs.go:282] 0 containers: []
	W1212 20:52:34.578070  202985 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:52:34.578076  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:52:34.578133  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:52:34.633423  202985 cri.go:89] found id: ""
	I1212 20:52:34.633446  202985 logs.go:282] 0 containers: []
	W1212 20:52:34.633454  202985 logs.go:284] No container was found matching "etcd"
	I1212 20:52:34.633461  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:52:34.633523  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:52:34.681901  202985 cri.go:89] found id: ""
	I1212 20:52:34.681927  202985 logs.go:282] 0 containers: []
	W1212 20:52:34.681936  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:52:34.681942  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:52:34.681997  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:52:34.738330  202985 cri.go:89] found id: ""
	I1212 20:52:34.738351  202985 logs.go:282] 0 containers: []
	W1212 20:52:34.738359  202985 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:52:34.738365  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:52:34.738422  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:52:34.776252  202985 cri.go:89] found id: ""
	I1212 20:52:34.776271  202985 logs.go:282] 0 containers: []
	W1212 20:52:34.776280  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:52:34.776286  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:52:34.776344  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:52:34.818539  202985 cri.go:89] found id: ""
	I1212 20:52:34.818559  202985 logs.go:282] 0 containers: []
	W1212 20:52:34.818567  202985 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:52:34.818574  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:52:34.818628  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:52:34.860454  202985 cri.go:89] found id: ""
	I1212 20:52:34.860523  202985 logs.go:282] 0 containers: []
	W1212 20:52:34.860544  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:52:34.860562  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:52:34.860654  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:52:34.906068  202985 cri.go:89] found id: ""
	I1212 20:52:34.906139  202985 logs.go:282] 0 containers: []
	W1212 20:52:34.906161  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:52:34.906184  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:52:34.906219  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:52:34.995461  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:52:34.995504  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:52:35.096407  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:52:35.096436  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:52:35.187451  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:52:35.187486  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:52:35.208601  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:52:35.208629  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:52:35.336018  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1212 20:52:35.336045  202985 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001337229s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 20:52:35.336078  202985 out.go:285] * 
	W1212 20:52:35.336125  202985 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001337229s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 20:52:35.336142  202985 out.go:285] * 
	W1212 20:52:35.338261  202985 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:52:35.343746  202985 out.go:203] 
	W1212 20:52:35.347596  202985 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001337229s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 20:52:35.347663  202985 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 20:52:35.347701  202985 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 20:52:35.350806  202985 out.go:203] 

** /stderr **
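The failure above is the kubelet never answering its /healthz probe within kubeadm's 4m0s window, and the stderr warnings point at two likely culprits: the cgroup driver and the deprecated cgroups v1 setup. A minimal triage sketch on the node, assembled from the commands the output itself suggests (the camelCase `failCgroupV1` field name and its placement in config.yaml are assumptions based on the 'FailCgroupV1' option named in the warning; everything else is taken from the log):

	# re-run the probe kubeadm was polling, then look at why the kubelet is down
	curl -sSL http://127.0.0.1:10248/healthz
	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 50
	# per the cgroups v1 warning, kubelet v1.35+ must opt in explicitly (assumed placement)
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	# retry with the cgroup-driver override the suggestion above recommends
	out/minikube-linux-arm64 start -p kubernetes-upgrade-016181 --memory=3072 \
	  --kubernetes-version=v1.35.0-beta.0 --driver=docker --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd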
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-016181 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 109
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-016181 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-016181 version --output=json: exit status 1 (216.890539ms)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
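Note that `kubectl version` still prints the clientVersion block because only the serverVersion half needs the apiserver; the stderr line shows 192.168.76.2:8443 actively refusing connections, consistent with the control plane never having come up. A quick reachability sketch (IP and port taken from the error; `nc` being installed on the host is an assumption):

	nc -zv 192.168.76.2 8443
	# or probe the apiserver health endpoint directly, ignoring the self-signed cert
	curl -k https://192.168.76.2:8443/healthz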
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-12 20:52:36.721441771 +0000 UTC m=+5023.815648930
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-016181
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-016181:

-- stdout --
	[
	    {
	        "Id": "8e93c0e8b99fbd7e21eaaa420513ed0562520dd1a99985093947a2c8d2447e61",
	        "Created": "2025-12-12T20:39:24.284331908Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 203111,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:39:59.709843302Z",
	            "FinishedAt": "2025-12-12T20:39:58.661639873Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/8e93c0e8b99fbd7e21eaaa420513ed0562520dd1a99985093947a2c8d2447e61/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8e93c0e8b99fbd7e21eaaa420513ed0562520dd1a99985093947a2c8d2447e61/hostname",
	        "HostsPath": "/var/lib/docker/containers/8e93c0e8b99fbd7e21eaaa420513ed0562520dd1a99985093947a2c8d2447e61/hosts",
	        "LogPath": "/var/lib/docker/containers/8e93c0e8b99fbd7e21eaaa420513ed0562520dd1a99985093947a2c8d2447e61/8e93c0e8b99fbd7e21eaaa420513ed0562520dd1a99985093947a2c8d2447e61-json.log",
	        "Name": "/kubernetes-upgrade-016181",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-016181:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-016181",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8e93c0e8b99fbd7e21eaaa420513ed0562520dd1a99985093947a2c8d2447e61",
	                "LowerDir": "/var/lib/docker/overlay2/35b5282400668c27358bb95155864dbd25b6caeb1c3d39e5a28d512933be8227-init/diff:/var/lib/docker/overlay2/e045d4bf347c64f3cbf42a97f0cb5729ed5699bda73ca5751717f555f7c01df1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/35b5282400668c27358bb95155864dbd25b6caeb1c3d39e5a28d512933be8227/merged",
	                "UpperDir": "/var/lib/docker/overlay2/35b5282400668c27358bb95155864dbd25b6caeb1c3d39e5a28d512933be8227/diff",
	                "WorkDir": "/var/lib/docker/overlay2/35b5282400668c27358bb95155864dbd25b6caeb1c3d39e5a28d512933be8227/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-016181",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-016181/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-016181",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-016181",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-016181",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d456befe6257d837a55f03f916d7f904a4b1f77f4cb3c0761f55f26c744883ab",
	            "SandboxKey": "/var/run/docker/netns/d456befe6257",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33018"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33019"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33022"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33020"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33021"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-016181": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:62:71:af:6d:eb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8e647f74d10c1d305a0b963bc02d58f48c90b2d61173cd28d6e163545409e758",
	                    "EndpointID": "e01a3772b3956bfab67c0fb5271fbb24d023f8b033fcad2a12b6a483f4ebf7be",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-016181",
	                        "8e93c0e8b99f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
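The inspect dump confirms the node container itself is fine: State.Status is "running", the /var volume and /lib/modules bind are mounted, and all five service ports are published on ephemeral 127.0.0.1 ports, so the failure is confined to the workload inside. For targeted checks, `docker inspect` accepts a Go template instead of dumping everything, e.g.:

	docker inspect -f '{{.State.Status}}' kubernetes-upgrade-016181
	docker inspect -f '{{json .NetworkSettings.Ports}}' kubernetes-upgrade-016181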
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-016181 -n kubernetes-upgrade-016181
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-016181 -n kubernetes-upgrade-016181: exit status 2 (468.926612ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
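Exit status 2 with "Running" on stdout means the host container is up while at least one cluster component is not, which is why the harness notes it "may be ok" and continues collecting logs. A machine-readable view of the per-component state, assuming the --output flag available in recent minikube releases:

	out/minikube-linux-arm64 status -p kubernetes-upgrade-016181 --output json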
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-016181 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p kubernetes-upgrade-016181 logs -n 25: (1.090277079s)
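When the full dump below is too noisy, minikube can pre-filter it; a hedged sketch (the --problems and --file flags exist in recent minikube releases, which is an assumption for this exact build):

	out/minikube-linux-arm64 -p kubernetes-upgrade-016181 logs --problems
	out/minikube-linux-arm64 -p kubernetes-upgrade-016181 logs --file=/tmp/kubernetes-upgrade-016181.log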
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                       │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-455251 sudo systemctl status kubelet --all --full --no-pager                                           │ cilium-455251            │ jenkins │ v1.37.0 │ 12 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-455251 sudo systemctl cat kubelet --no-pager                                                           │ cilium-455251            │ jenkins │ v1.37.0 │ 12 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-455251 sudo journalctl -xeu kubelet --all --full --no-pager                                            │ cilium-455251            │ jenkins │ v1.37.0 │ 12 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-455251 sudo cat /etc/kubernetes/kubelet.conf                                                           │ cilium-455251            │ jenkins │ v1.37.0 │ 12 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-455251 sudo cat /var/lib/kubelet/config.yaml                                                           │ cilium-455251            │ jenkins │ v1.37.0 │ 12 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-455251 sudo systemctl status docker --all --full --no-pager                                            │ cilium-455251            │ jenkins │ v1.37.0 │ 12 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-455251 sudo systemctl cat docker --no-pager                                                            │ cilium-455251            │ jenkins │ v1.37.0 │ 12 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-455251 sudo cat /etc/docker/daemon.json                                                                │ cilium-455251            │ jenkins │ v1.37.0 │ 12 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-455251 sudo docker system info                                                                         │ cilium-455251            │ jenkins │ v1.37.0 │ 12 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-455251 sudo systemctl status cri-docker --all --full --no-pager                                        │ cilium-455251            │ jenkins │ v1.37.0 │ 12 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-455251 sudo systemctl cat cri-docker --no-pager                                                        │ cilium-455251            │ jenkins │ v1.37.0 │ 12 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-455251 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                   │ cilium-455251            │ jenkins │ v1.37.0 │ 12 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-455251 sudo cat /usr/lib/systemd/system/cri-docker.service                                             │ cilium-455251            │ jenkins │ v1.37.0 │ 12 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-455251 sudo cri-dockerd --version                                                                      │ cilium-455251            │ jenkins │ v1.37.0 │ 12 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-455251 sudo systemctl status containerd --all --full --no-pager                                        │ cilium-455251            │ jenkins │ v1.37.0 │ 12 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-455251 sudo systemctl cat containerd --no-pager                                                        │ cilium-455251            │ jenkins │ v1.37.0 │ 12 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-455251 sudo cat /lib/systemd/system/containerd.service                                                 │ cilium-455251            │ jenkins │ v1.37.0 │ 12 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-455251 sudo cat /etc/containerd/config.toml                                                            │ cilium-455251            │ jenkins │ v1.37.0 │ 12 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-455251 sudo containerd config dump                                                                     │ cilium-455251            │ jenkins │ v1.37.0 │ 12 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-455251 sudo systemctl status crio --all --full --no-pager                                              │ cilium-455251            │ jenkins │ v1.37.0 │ 12 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-455251 sudo systemctl cat crio --no-pager                                                              │ cilium-455251            │ jenkins │ v1.37.0 │ 12 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-455251 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                    │ cilium-455251            │ jenkins │ v1.37.0 │ 12 Dec 25 20:52 UTC │                     │
	│ ssh     │ -p cilium-455251 sudo crio config                                                                                │ cilium-455251            │ jenkins │ v1.37.0 │ 12 Dec 25 20:52 UTC │                     │
	│ delete  │ -p cilium-455251                                                                                                 │ cilium-455251            │ jenkins │ v1.37.0 │ 12 Dec 25 20:52 UTC │ 12 Dec 25 20:52 UTC │
	│ start   │ -p force-systemd-env-557154 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd │ force-systemd-env-557154 │ jenkins │ v1.37.0 │ 12 Dec 25 20:52 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:52:10
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:52:10.543504  244873 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:52:10.543640  244873 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:52:10.543653  244873 out.go:374] Setting ErrFile to fd 2...
	I1212 20:52:10.543659  244873 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:52:10.544088  244873 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 20:52:10.544605  244873 out.go:368] Setting JSON to false
	I1212 20:52:10.545489  244873 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5680,"bootTime":1765567051,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 20:52:10.545590  244873 start.go:143] virtualization:  
	I1212 20:52:10.549156  244873 out.go:179] * [force-systemd-env-557154] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:52:10.553075  244873 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:52:10.553161  244873 notify.go:221] Checking for updates...
	I1212 20:52:10.560586  244873 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:52:10.563538  244873 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 20:52:10.566460  244873 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	I1212 20:52:10.569392  244873 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:52:10.572250  244873 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1212 20:52:10.575622  244873 config.go:182] Loaded profile config "kubernetes-upgrade-016181": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 20:52:10.575769  244873 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:52:10.597362  244873 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:52:10.597492  244873 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:52:10.667337  244873 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:52:10.658143757 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:52:10.667446  244873 docker.go:319] overlay module found
	I1212 20:52:10.672304  244873 out.go:179] * Using the docker driver based on user configuration
	I1212 20:52:10.675177  244873 start.go:309] selected driver: docker
	I1212 20:52:10.675194  244873 start.go:927] validating driver "docker" against <nil>
	I1212 20:52:10.675207  244873 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:52:10.676049  244873 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:52:10.724229  244873 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:52:10.715058157 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:52:10.724394  244873 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 20:52:10.724613  244873 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 20:52:10.727515  244873 out.go:179] * Using Docker driver with root privileges
	I1212 20:52:10.730283  244873 cni.go:84] Creating CNI manager for ""
	I1212 20:52:10.730348  244873 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 20:52:10.730364  244873 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 20:52:10.730477  244873 start.go:353] cluster config:
	{Name:force-systemd-env-557154 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-557154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:52:10.733556  244873 out.go:179] * Starting "force-systemd-env-557154" primary control-plane node in "force-systemd-env-557154" cluster
	I1212 20:52:10.736293  244873 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 20:52:10.739157  244873 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:52:10.741896  244873 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1212 20:52:10.741942  244873 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1212 20:52:10.741953  244873 cache.go:65] Caching tarball of preloaded images
	I1212 20:52:10.741961  244873 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:52:10.742031  244873 preload.go:238] Found /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 20:52:10.742042  244873 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1212 20:52:10.742143  244873 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/config.json ...
	I1212 20:52:10.742161  244873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/config.json: {Name:mk8ba3df087c625b64f9cc45000606b01dd7a143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:52:10.762695  244873 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:52:10.762719  244873 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:52:10.762735  244873 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:52:10.762765  244873 start.go:360] acquireMachinesLock for force-systemd-env-557154: {Name:mk75a620aa60e1f7d48c93dea9e5e840a2f6c03a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:52:10.762890  244873 start.go:364] duration metric: took 104.07µs to acquireMachinesLock for "force-systemd-env-557154"
	I1212 20:52:10.762922  244873 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-557154 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-557154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 20:52:10.762998  244873 start.go:125] createHost starting for "" (driver="docker")
	I1212 20:52:10.766604  244873 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 20:52:10.766865  244873 start.go:159] libmachine.API.Create for "force-systemd-env-557154" (driver="docker")
	I1212 20:52:10.766905  244873 client.go:173] LocalClient.Create starting
	I1212 20:52:10.766976  244873 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem
	I1212 20:52:10.767019  244873 main.go:143] libmachine: Decoding PEM data...
	I1212 20:52:10.767054  244873 main.go:143] libmachine: Parsing certificate...
	I1212 20:52:10.767111  244873 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem
	I1212 20:52:10.767137  244873 main.go:143] libmachine: Decoding PEM data...
	I1212 20:52:10.767155  244873 main.go:143] libmachine: Parsing certificate...
	I1212 20:52:10.767522  244873 cli_runner.go:164] Run: docker network inspect force-systemd-env-557154 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 20:52:10.784496  244873 cli_runner.go:211] docker network inspect force-systemd-env-557154 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 20:52:10.784580  244873 network_create.go:284] running [docker network inspect force-systemd-env-557154] to gather additional debugging logs...
	I1212 20:52:10.784604  244873 cli_runner.go:164] Run: docker network inspect force-systemd-env-557154
	W1212 20:52:10.801221  244873 cli_runner.go:211] docker network inspect force-systemd-env-557154 returned with exit code 1
	I1212 20:52:10.801261  244873 network_create.go:287] error running [docker network inspect force-systemd-env-557154]: docker network inspect force-systemd-env-557154: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-557154 not found
	I1212 20:52:10.801275  244873 network_create.go:289] output of [docker network inspect force-systemd-env-557154]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-557154 not found
	
	** /stderr **
	I1212 20:52:10.801369  244873 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:52:10.819719  244873 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c977eaa96b74 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:96:50:90:af:1f:c1} reservation:<nil>}
	I1212 20:52:10.820057  244873 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e1c52f27b33b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:d3:02:ed:51:fa} reservation:<nil>}
	I1212 20:52:10.820413  244873 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-1cbf9a6743e1 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:40:b2:02:1a:bf} reservation:<nil>}
	I1212 20:52:10.820752  244873 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8e647f74d10c IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:b3:19:6b:88:f2} reservation:<nil>}
	I1212 20:52:10.821303  244873 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019f3b80}
	I1212 20:52:10.821331  244873 network_create.go:124] attempt to create docker network force-systemd-env-557154 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1212 20:52:10.821424  244873 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-557154 force-systemd-env-557154
	I1212 20:52:10.881072  244873 network_create.go:108] docker network force-systemd-env-557154 192.168.85.0/24 created
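The four "skipping subnet" lines above show minikube walking the private 192.168.x.0/24 candidates until one is free, then creating a labelled bridge network with the docker command shown. Those labels make it easy to audit what minikube owns (filter syntax is standard docker; label keys taken from the log):

	docker network ls --filter label=created_by.minikube.sigs.k8s.io=true
	docker network inspect force-systemd-env-557154 -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}'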
	I1212 20:52:10.881105  244873 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-557154" container
	I1212 20:52:10.881190  244873 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 20:52:10.898050  244873 cli_runner.go:164] Run: docker volume create force-systemd-env-557154 --label name.minikube.sigs.k8s.io=force-systemd-env-557154 --label created_by.minikube.sigs.k8s.io=true
	I1212 20:52:10.917294  244873 oci.go:103] Successfully created a docker volume force-systemd-env-557154
	I1212 20:52:10.917393  244873 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-557154-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-557154 --entrypoint /usr/bin/test -v force-systemd-env-557154:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib
	I1212 20:52:11.468293  244873 oci.go:107] Successfully prepared a docker volume force-systemd-env-557154
	I1212 20:52:11.468368  244873 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1212 20:52:11.468379  244873 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 20:52:11.468464  244873 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-557154:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 20:52:15.457163  244873 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-557154:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir: (3.988662755s)
	I1212 20:52:15.457198  244873 kic.go:203] duration metric: took 3.988814094s to extract preloaded images to volume ...
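The preload tarball is extracted straight into the named volume that the node container later mounts at /var, so containerd starts with the v1.34.2 images already in place. To spot-check what landed there, a throwaway container works (busybox as the helper image is an assumption; any image with ls would do):

	docker run --rm -v force-systemd-env-557154:/var busybox ls /var/lib/containerd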
	W1212 20:52:15.457334  244873 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 20:52:15.457451  244873 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 20:52:15.535658  244873 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-557154 --name force-systemd-env-557154 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-557154 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-557154 --network force-systemd-env-557154 --ip 192.168.85.2 --volume force-systemd-env-557154:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138
	I1212 20:52:15.859134  244873 cli_runner.go:164] Run: docker container inspect force-systemd-env-557154 --format={{.State.Running}}
	I1212 20:52:15.885450  244873 cli_runner.go:164] Run: docker container inspect force-systemd-env-557154 --format={{.State.Status}}
	I1212 20:52:15.908108  244873 cli_runner.go:164] Run: docker exec force-systemd-env-557154 stat /var/lib/dpkg/alternatives/iptables
	I1212 20:52:15.956336  244873 oci.go:144] the created container "force-systemd-env-557154" has a running status.
	I1212 20:52:15.956377  244873 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/force-systemd-env-557154/id_rsa...
	I1212 20:52:16.372586  244873 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/force-systemd-env-557154/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1212 20:52:16.372678  244873 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22112-2315/.minikube/machines/force-systemd-env-557154/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 20:52:16.400441  244873 cli_runner.go:164] Run: docker container inspect force-systemd-env-557154 --format={{.State.Status}}
	I1212 20:52:16.421572  244873 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 20:52:16.421596  244873 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-557154 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 20:52:16.469081  244873 cli_runner.go:164] Run: docker container inspect force-systemd-env-557154 --format={{.State.Status}}
	I1212 20:52:16.488277  244873 machine.go:94] provisionDockerMachine start ...
	I1212 20:52:16.488381  244873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-557154
	I1212 20:52:16.505313  244873 main.go:143] libmachine: Using SSH client type: native
	I1212 20:52:16.505656  244873 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1212 20:52:16.505672  244873 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:52:16.506329  244873 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 20:52:19.659138  244873 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-557154
	
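The earlier "ssh: handshake failed: EOF" is the normal first-connection race while sshd comes up inside the container; libmachine simply retries until the hostname command succeeds. The same session can be reproduced by hand with the generated key and the mapped port from the dial above:

	ssh -i /home/jenkins/minikube-integration/22112-2315/.minikube/machines/force-systemd-env-557154/id_rsa \
	  -p 33043 docker@127.0.0.1 hostname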
	I1212 20:52:19.659161  244873 ubuntu.go:182] provisioning hostname "force-systemd-env-557154"
	I1212 20:52:19.659224  244873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-557154
	I1212 20:52:19.676107  244873 main.go:143] libmachine: Using SSH client type: native
	I1212 20:52:19.676427  244873 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1212 20:52:19.676445  244873 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-557154 && echo "force-systemd-env-557154" | sudo tee /etc/hostname
	I1212 20:52:19.838146  244873 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-557154
	
	I1212 20:52:19.838221  244873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-557154
	I1212 20:52:19.857776  244873 main.go:143] libmachine: Using SSH client type: native
	I1212 20:52:19.858114  244873 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1212 20:52:19.858134  244873 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-557154' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-557154/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-557154' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:52:20.018420  244873 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:52:20.018450  244873 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-2315/.minikube}
	I1212 20:52:20.018471  244873 ubuntu.go:190] setting up certificates
	I1212 20:52:20.018480  244873 provision.go:84] configureAuth start
	I1212 20:52:20.018567  244873 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-557154
	I1212 20:52:20.038467  244873 provision.go:143] copyHostCerts
	I1212 20:52:20.038516  244873 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem
	I1212 20:52:20.038550  244873 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem, removing ...
	I1212 20:52:20.038557  244873 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem
	I1212 20:52:20.038637  244873 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem (1078 bytes)
	I1212 20:52:20.038733  244873 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem
	I1212 20:52:20.038758  244873 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem, removing ...
	I1212 20:52:20.038763  244873 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem
	I1212 20:52:20.038794  244873 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem (1123 bytes)
	I1212 20:52:20.038844  244873 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem
	I1212 20:52:20.038865  244873 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem, removing ...
	I1212 20:52:20.038870  244873 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem
	I1212 20:52:20.038895  244873 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem (1679 bytes)
	I1212 20:52:20.038954  244873 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-557154 san=[127.0.0.1 192.168.85.2 force-systemd-env-557154 localhost minikube]
	I1212 20:52:20.461270  244873 provision.go:177] copyRemoteCerts
	I1212 20:52:20.461337  244873 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:52:20.461390  244873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-557154
	I1212 20:52:20.479279  244873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/force-systemd-env-557154/id_rsa Username:docker}
	I1212 20:52:20.584244  244873 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 20:52:20.584322  244873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 20:52:20.602016  244873 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 20:52:20.602094  244873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 20:52:20.619055  244873 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 20:52:20.619115  244873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 20:52:20.636467  244873 provision.go:87] duration metric: took 617.964741ms to configureAuth
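configureAuth refreshes the host-side CA material and scps the server cert pair into /etc/docker on the node, with SANs covering 127.0.0.1, the static IP, and the machine names (the san=[...] list above). A sketch for verifying what was installed, assuming OpenSSL 1.1.1+ for the -ext option (paths taken from the scp lines above):

	out/minikube-linux-arm64 -p force-systemd-env-557154 ssh -- \
	  sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName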
	I1212 20:52:20.636511  244873 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:52:20.636696  244873 config.go:182] Loaded profile config "force-systemd-env-557154": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1212 20:52:20.636707  244873 machine.go:97] duration metric: took 4.148411486s to provisionDockerMachine
	I1212 20:52:20.636714  244873 client.go:176] duration metric: took 9.869800125s to LocalClient.Create
	I1212 20:52:20.636733  244873 start.go:167] duration metric: took 9.86987021s to libmachine.API.Create "force-systemd-env-557154"
	I1212 20:52:20.636741  244873 start.go:293] postStartSetup for "force-systemd-env-557154" (driver="docker")
	I1212 20:52:20.636749  244873 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:52:20.636811  244873 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:52:20.636857  244873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-557154
	I1212 20:52:20.655939  244873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/force-systemd-env-557154/id_rsa Username:docker}
	I1212 20:52:20.763927  244873 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:52:20.767282  244873 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:52:20.767312  244873 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:52:20.767324  244873 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-2315/.minikube/addons for local assets ...
	I1212 20:52:20.767377  244873 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-2315/.minikube/files for local assets ...
	I1212 20:52:20.767463  244873 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1212 20:52:20.767481  244873 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem -> /etc/ssl/certs/41202.pem
	I1212 20:52:20.767584  244873 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:52:20.775138  244873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1212 20:52:20.792409  244873 start.go:296] duration metric: took 155.654406ms for postStartSetup
	I1212 20:52:20.792770  244873 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-557154
	I1212 20:52:20.809334  244873 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/config.json ...
	I1212 20:52:20.809615  244873 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:52:20.809667  244873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-557154
	I1212 20:52:20.826836  244873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/force-systemd-env-557154/id_rsa Username:docker}
	I1212 20:52:20.928721  244873 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:52:20.933116  244873 start.go:128] duration metric: took 10.170103873s to createHost
	I1212 20:52:20.933141  244873 start.go:83] releasing machines lock for "force-systemd-env-557154", held for 10.170235857s
	I1212 20:52:20.933209  244873 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-557154
	I1212 20:52:20.949840  244873 ssh_runner.go:195] Run: cat /version.json
	I1212 20:52:20.949896  244873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-557154
	I1212 20:52:20.950155  244873 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:52:20.950210  244873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-557154
	I1212 20:52:20.974628  244873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/force-systemd-env-557154/id_rsa Username:docker}
	I1212 20:52:20.980161  244873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/force-systemd-env-557154/id_rsa Username:docker}
	I1212 20:52:21.075709  244873 ssh_runner.go:195] Run: systemctl --version
	I1212 20:52:21.164178  244873 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:52:21.168597  244873 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:52:21.168667  244873 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:52:21.195755  244873 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1212 20:52:21.195781  244873 start.go:496] detecting cgroup driver to use...
	I1212 20:52:21.195797  244873 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1212 20:52:21.195923  244873 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 20:52:21.211046  244873 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 20:52:21.224409  244873 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:52:21.224482  244873 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:52:21.241750  244873 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:52:21.259949  244873 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:52:21.376744  244873 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:52:21.497713  244873 docker.go:234] disabling docker service ...
	I1212 20:52:21.497799  244873 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:52:21.521869  244873 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:52:21.535375  244873 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:52:21.652278  244873 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:52:21.765867  244873 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:52:21.778138  244873 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:52:21.792633  244873 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 20:52:21.801845  244873 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 20:52:21.810348  244873 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1212 20:52:21.810436  244873 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1212 20:52:21.819484  244873 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 20:52:21.828119  244873 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 20:52:21.836660  244873 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 20:52:21.845469  244873 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:52:21.853508  244873 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 20:52:21.862453  244873 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 20:52:21.871022  244873 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 20:52:21.879916  244873 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:52:21.887326  244873 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:52:21.894654  244873 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:52:22.016401  244873 ssh_runner.go:195] Run: sudo systemctl restart containerd
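
Editor's note: the sed sequence above is how minikube flips containerd to the systemd cgroup driver — it rewrites `SystemdCgroup = ...` in /etc/containerd/config.toml in place, then restarts the service. A minimal Go sketch of the same substitution (the real flow shells the sed command over SSH; the TOML fragment here is assumed for illustration):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Assumed fragment of /etc/containerd/config.toml as shipped in the
        // kicbase image, before the rewrite.
        fragment := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false
    `
        // Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Print(re.ReplaceAllString(fragment, "${1}SystemdCgroup = true"))
    }
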
	I1212 20:52:22.144691  244873 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 20:52:22.144806  244873 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 20:52:22.148588  244873 start.go:564] Will wait 60s for crictl version
	I1212 20:52:22.148667  244873 ssh_runner.go:195] Run: which crictl
	I1212 20:52:22.152055  244873 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:52:22.176096  244873 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 20:52:22.176209  244873 ssh_runner.go:195] Run: containerd --version
	I1212 20:52:22.209009  244873 ssh_runner.go:195] Run: containerd --version
	I1212 20:52:22.234665  244873 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1212 20:52:22.237641  244873 cli_runner.go:164] Run: docker network inspect force-systemd-env-557154 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:52:22.258053  244873 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1212 20:52:22.261856  244873 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
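
Editor's note: the bash pipeline above makes the /etc/hosts edit idempotent — any stale line for the name is stripped before the fresh "ip<TAB>name" mapping is appended (the same pattern recurs later for control-plane.minikube.internal). A minimal Go sketch of that logic; upsertHostsEntry is a hypothetical helper for illustration, not minikube's own code:

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHostsEntry drops any existing line ending in "\t<name>", then
    // appends a fresh "ip\tname" mapping, mirroring the grep -v + echo pipeline.
    func upsertHostsEntry(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        hosts := "127.0.0.1\tlocalhost\n192.168.85.1\thost.minikube.internal\n"
        fmt.Print(upsertHostsEntry(hosts, "192.168.85.1", "host.minikube.internal"))
    }
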
	I1212 20:52:22.271378  244873 kubeadm.go:884] updating cluster {Name:force-systemd-env-557154 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-557154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:52:22.271496  244873 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1212 20:52:22.271570  244873 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:52:22.296890  244873 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 20:52:22.296919  244873 containerd.go:534] Images already preloaded, skipping extraction
	I1212 20:52:22.296982  244873 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:52:22.319689  244873 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 20:52:22.319712  244873 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:52:22.319720  244873 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 containerd true true} ...
	I1212 20:52:22.319819  244873 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-557154 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-557154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:52:22.319999  244873 ssh_runner.go:195] Run: sudo crictl info
	I1212 20:52:22.344368  244873 cni.go:84] Creating CNI manager for ""
	I1212 20:52:22.344399  244873 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 20:52:22.344419  244873 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:52:22.344450  244873 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-557154 NodeName:force-systemd-env-557154 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:52:22.344615  244873 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "force-systemd-env-557154"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:52:22.344697  244873 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 20:52:22.352542  244873 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:52:22.352630  244873 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:52:22.360242  244873 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 20:52:22.372739  244873 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:52:22.385509  244873 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2236 bytes)
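
Editor's note: with the rendered config staged at /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked without mutating the node via kubeadm's --dry-run flag. A minimal sketch, assuming kubeadm is on the PATH of the target machine and run with root privileges:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // --dry-run renders the manifests and reports validation errors
        // without writing anything to the node.
        out, err := exec.Command("sudo", "kubeadm", "init",
            "--config", "/var/tmp/minikube/kubeadm.yaml.new",
            "--dry-run").CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("dry-run failed:", err)
        }
    }
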
	I1212 20:52:22.398101  244873 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:52:22.401602  244873 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:52:22.411097  244873 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:52:22.522997  244873 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:52:22.539660  244873 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154 for IP: 192.168.85.2
	I1212 20:52:22.539683  244873 certs.go:195] generating shared ca certs ...
	I1212 20:52:22.539700  244873 certs.go:227] acquiring lock for ca certs: {Name:mk39256c1929fe0803d745b94bd58afc348a7e3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:52:22.539917  244873 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key
	I1212 20:52:22.539962  244873 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key
	I1212 20:52:22.539970  244873 certs.go:257] generating profile certs ...
	I1212 20:52:22.540029  244873 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/client.key
	I1212 20:52:22.540040  244873 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/client.crt with IP's: []
	I1212 20:52:22.849471  244873 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/client.crt ...
	I1212 20:52:22.849503  244873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/client.crt: {Name:mk0f1165758183923a711e5f0810104ded9d91e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:52:22.849743  244873 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/client.key ...
	I1212 20:52:22.849760  244873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/client.key: {Name:mk3b4c985172ee42d2f133a497c2617ab4c40acd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:52:22.849868  244873 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/apiserver.key.0c5647f3
	I1212 20:52:22.849893  244873 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/apiserver.crt.0c5647f3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1212 20:52:23.158093  244873 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/apiserver.crt.0c5647f3 ...
	I1212 20:52:23.158129  244873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/apiserver.crt.0c5647f3: {Name:mk006f41c52d54da1de245f3983c17ac4725036e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:52:23.158360  244873 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/apiserver.key.0c5647f3 ...
	I1212 20:52:23.158380  244873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/apiserver.key.0c5647f3: {Name:mkee393bc462feb994efd04abcd4dd101fad9afb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:52:23.158482  244873 certs.go:382] copying /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/apiserver.crt.0c5647f3 -> /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/apiserver.crt
	I1212 20:52:23.158567  244873 certs.go:386] copying /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/apiserver.key.0c5647f3 -> /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/apiserver.key
	I1212 20:52:23.158627  244873 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/proxy-client.key
	I1212 20:52:23.158645  244873 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/proxy-client.crt with IP's: []
	I1212 20:52:23.231028  244873 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/proxy-client.crt ...
	I1212 20:52:23.231061  244873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/proxy-client.crt: {Name:mk39baa3aa0af11e72870a72ecb77531b8a3350d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:52:23.231260  244873 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/proxy-client.key ...
	I1212 20:52:23.231275  244873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/proxy-client.key: {Name:mk75e675c808e60a1dcad38294e39f1a6da6267c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:52:23.231377  244873 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 20:52:23.231402  244873 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 20:52:23.231420  244873 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 20:52:23.231438  244873 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 20:52:23.231456  244873 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 20:52:23.231474  244873 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 20:52:23.231492  244873 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 20:52:23.231502  244873 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 20:52:23.231562  244873 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem (1338 bytes)
	W1212 20:52:23.231608  244873 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1212 20:52:23.231621  244873 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:52:23.231652  244873 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem (1078 bytes)
	I1212 20:52:23.231680  244873 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:52:23.231710  244873 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem (1679 bytes)
	I1212 20:52:23.231767  244873 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1212 20:52:23.231815  244873 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem -> /usr/share/ca-certificates/4120.pem
	I1212 20:52:23.231831  244873 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem -> /usr/share/ca-certificates/41202.pem
	I1212 20:52:23.231877  244873 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:52:23.232488  244873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:52:23.251234  244873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:52:23.270477  244873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:52:23.288476  244873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:52:23.306101  244873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1212 20:52:23.323459  244873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 20:52:23.340920  244873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:52:23.358590  244873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/force-systemd-env-557154/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:52:23.377020  244873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1212 20:52:23.394209  244873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1212 20:52:23.411948  244873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:52:23.429071  244873 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:52:23.441954  244873 ssh_runner.go:195] Run: openssl version
	I1212 20:52:23.448705  244873 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:52:23.455767  244873 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:52:23.465067  244873 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:52:23.469241  244873 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:30 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:52:23.469354  244873 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:52:23.515076  244873 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:52:23.522573  244873 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 20:52:23.530091  244873 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1212 20:52:23.537614  244873 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1212 20:52:23.545124  244873 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1212 20:52:23.548894  244873 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:40 /usr/share/ca-certificates/4120.pem
	I1212 20:52:23.548962  244873 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1212 20:52:23.590052  244873 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:52:23.597744  244873 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4120.pem /etc/ssl/certs/51391683.0
	I1212 20:52:23.605188  244873 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1212 20:52:23.612567  244873 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1212 20:52:23.619919  244873 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1212 20:52:23.623432  244873 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:40 /usr/share/ca-certificates/41202.pem
	I1212 20:52:23.623533  244873 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1212 20:52:23.664408  244873 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:52:23.671614  244873 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41202.pem /etc/ssl/certs/3ec20f2e.0
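
Editor's note: the openssl/ln pairs above build OpenSSL's hash-symlink farm — each CA certificate must be reachable at /etc/ssl/certs/<subject-hash>.0 (e.g. b5213941.0 for minikubeCA.pem) for the TLS stack to find it. A minimal Go sketch of the same lookup, shelling out to openssl as the log does; the cert path is taken from the log, everything else is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        // `openssl x509 -hash -noout` prints the subject hash OpenSSL uses
        // for directory lookups, e.g. "b5213941".
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        // minikube then creates the symlink the verifier will follow:
        fmt.Printf("sudo ln -fs %s /etc/ssl/certs/%s.0\n", cert, hash)
    }
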
	I1212 20:52:23.679358  244873 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:52:23.682910  244873 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 20:52:23.682961  244873 kubeadm.go:401] StartCluster: {Name:force-systemd-env-557154 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-557154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:52:23.683077  244873 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 20:52:23.683136  244873 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:52:23.725262  244873 cri.go:89] found id: ""
	I1212 20:52:23.725337  244873 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:52:23.735526  244873 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:52:23.744338  244873 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:52:23.744410  244873 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:52:23.754542  244873 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:52:23.754563  244873 kubeadm.go:158] found existing configuration files:
	
	I1212 20:52:23.754612  244873 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 20:52:23.763669  244873 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:52:23.763742  244873 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:52:23.773690  244873 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 20:52:23.781563  244873 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:52:23.781629  244873 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:52:23.789209  244873 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 20:52:23.796842  244873 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:52:23.796942  244873 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:52:23.804675  244873 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 20:52:23.812592  244873 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:52:23.812664  244873 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:52:23.819955  244873 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:52:23.859259  244873 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 20:52:23.859593  244873 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:52:23.883284  244873 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:52:23.883379  244873 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 20:52:23.883433  244873 kubeadm.go:319] OS: Linux
	I1212 20:52:23.883496  244873 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:52:23.883575  244873 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:52:23.883642  244873 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:52:23.883707  244873 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:52:23.883770  244873 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:52:23.883894  244873 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:52:23.883956  244873 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:52:23.884022  244873 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:52:23.884084  244873 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:52:23.952702  244873 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:52:23.952855  244873 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:52:23.952982  244873 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:52:23.959406  244873 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:52:23.966404  244873 out.go:252]   - Generating certificates and keys ...
	I1212 20:52:23.966531  244873 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:52:23.966643  244873 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:52:24.440950  244873 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 20:52:24.584902  244873 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 20:52:25.180768  244873 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 20:52:25.392535  244873 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 20:52:26.073683  244873 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 20:52:26.073841  244873 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-557154 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 20:52:27.319721  244873 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 20:52:27.320088  244873 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-557154 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 20:52:27.846944  244873 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 20:52:28.251100  244873 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 20:52:29.110728  244873 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 20:52:29.111065  244873 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:52:29.424805  244873 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:52:29.857467  244873 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:52:30.243251  244873 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:52:31.043929  244873 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:52:31.839552  244873 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:52:31.840341  244873 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:52:31.842984  244873 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:52:34.524939  202985 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001337229s
	I1212 20:52:34.524975  202985 kubeadm.go:319] 
	I1212 20:52:34.525029  202985 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:52:34.525065  202985 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:52:34.525168  202985 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:52:34.525178  202985 kubeadm.go:319] 
	I1212 20:52:34.525277  202985 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:52:34.525311  202985 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:52:34.525344  202985 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:52:34.525358  202985 kubeadm.go:319] 
	I1212 20:52:34.529517  202985 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 20:52:34.529922  202985 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:52:34.530028  202985 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:52:34.530281  202985 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 20:52:34.530290  202985 kubeadm.go:319] 
	I1212 20:52:34.530356  202985 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 20:52:34.530412  202985 kubeadm.go:403] duration metric: took 12m18.580331222s to StartCluster
	I1212 20:52:34.530448  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:52:34.530509  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:52:34.578040  202985 cri.go:89] found id: ""
	I1212 20:52:34.578062  202985 logs.go:282] 0 containers: []
	W1212 20:52:34.578070  202985 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:52:34.578076  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 20:52:34.578133  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:52:34.633423  202985 cri.go:89] found id: ""
	I1212 20:52:34.633446  202985 logs.go:282] 0 containers: []
	W1212 20:52:34.633454  202985 logs.go:284] No container was found matching "etcd"
	I1212 20:52:34.633461  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 20:52:34.633523  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:52:34.681901  202985 cri.go:89] found id: ""
	I1212 20:52:34.681927  202985 logs.go:282] 0 containers: []
	W1212 20:52:34.681936  202985 logs.go:284] No container was found matching "coredns"
	I1212 20:52:34.681942  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:52:34.681997  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:52:34.738330  202985 cri.go:89] found id: ""
	I1212 20:52:34.738351  202985 logs.go:282] 0 containers: []
	W1212 20:52:34.738359  202985 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:52:34.738365  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:52:34.738422  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:52:34.776252  202985 cri.go:89] found id: ""
	I1212 20:52:34.776271  202985 logs.go:282] 0 containers: []
	W1212 20:52:34.776280  202985 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:52:34.776286  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:52:34.776344  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:52:34.818539  202985 cri.go:89] found id: ""
	I1212 20:52:34.818559  202985 logs.go:282] 0 containers: []
	W1212 20:52:34.818567  202985 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:52:34.818574  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 20:52:34.818628  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:52:34.860454  202985 cri.go:89] found id: ""
	I1212 20:52:34.860523  202985 logs.go:282] 0 containers: []
	W1212 20:52:34.860544  202985 logs.go:284] No container was found matching "kindnet"
	I1212 20:52:34.860562  202985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 20:52:34.860654  202985 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 20:52:34.906068  202985 cri.go:89] found id: ""
	I1212 20:52:34.906139  202985 logs.go:282] 0 containers: []
	W1212 20:52:34.906161  202985 logs.go:284] No container was found matching "storage-provisioner"
	I1212 20:52:34.906184  202985 logs.go:123] Gathering logs for containerd ...
	I1212 20:52:34.906219  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 20:52:34.995461  202985 logs.go:123] Gathering logs for container status ...
	I1212 20:52:34.995504  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:52:35.096407  202985 logs.go:123] Gathering logs for kubelet ...
	I1212 20:52:35.096436  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:52:35.187451  202985 logs.go:123] Gathering logs for dmesg ...
	I1212 20:52:35.187486  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:52:35.208601  202985 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:52:35.208629  202985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:52:35.336018  202985 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1212 20:52:35.336045  202985 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001337229s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
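
Editor's note: the health check that times out above is a plain HTTP GET against the kubelet's healthz endpoint on 127.0.0.1:10248, equivalent to the `curl -sSL http://127.0.0.1:10248/healthz` quoted in the error. A minimal Go sketch reproducing that probe on the node:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Get("http://127.0.0.1:10248/healthz")
        if err != nil {
            // Matches the failure above: "connection refused" while the
            // kubelet is down or crash-looping.
            fmt.Println("kubelet not healthy:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("status=%d body=%q\n", resp.StatusCode, body) // expect 200 "ok"
    }
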
	W1212 20:52:35.336078  202985 out.go:285] * 
	W1212 20:52:35.336125  202985 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001337229s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 20:52:35.336142  202985 out.go:285] * 
	W1212 20:52:35.338261  202985 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:52:35.343746  202985 out.go:203] 
	W1212 20:52:35.347596  202985 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1212 20:52:35.347663  202985 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 20:52:35.347701  202985 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 20:52:35.350806  202985 out.go:203] 
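The suggestion above points at the kubelet cgroup driver. A by-hand retry of the failing profile with that override, assuming the profile name from this run, would look like the following sketch (not part of the captured run):

    # Retry the failed start with the extra-config flag the suggestion names.
    minikube start -p kubernetes-upgrade-016181 \
      --extra-config=kubelet.cgroup-driver=systemd
    # Then check why the kubelet was crash-looping inside the node:
    minikube ssh -p kubernetes-upgrade-016181 -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
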
	I1212 20:52:31.846722  244873 out.go:252]   - Booting up control plane ...
	I1212 20:52:31.846847  244873 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:52:31.846935  244873 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:52:31.847914  244873 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:52:31.864641  244873 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:52:31.864769  244873 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:52:31.871901  244873 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:52:31.872232  244873 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:52:31.872278  244873 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:52:32.019047  244873 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:52:32.019176  244873 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:52:33.018578  244873 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000897875s
	I1212 20:52:33.022116  244873 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 20:52:33.022211  244873 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1212 20:52:33.022523  244873 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 20:52:33.022608  244873 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 20:44:27 kubernetes-upgrade-016181 containerd[556]: time="2025-12-12T20:44:27.955013227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 20:44:27 kubernetes-upgrade-016181 containerd[556]: time="2025-12-12T20:44:27.956358736Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 1.357752691s"
	Dec 12 20:44:27 kubernetes-upgrade-016181 containerd[556]: time="2025-12-12T20:44:27.956408203Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\""
	Dec 12 20:44:27 kubernetes-upgrade-016181 containerd[556]: time="2025-12-12T20:44:27.957755927Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
	Dec 12 20:44:28 kubernetes-upgrade-016181 containerd[556]: time="2025-12-12T20:44:28.660085998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 12 20:44:28 kubernetes-upgrade-016181 containerd[556]: time="2025-12-12T20:44:28.664665752Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
	Dec 12 20:44:28 kubernetes-upgrade-016181 containerd[556]: time="2025-12-12T20:44:28.667916014Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 12 20:44:28 kubernetes-upgrade-016181 containerd[556]: time="2025-12-12T20:44:28.675418639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 12 20:44:28 kubernetes-upgrade-016181 containerd[556]: time="2025-12-12T20:44:28.678681019Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 720.785642ms"
	Dec 12 20:44:28 kubernetes-upgrade-016181 containerd[556]: time="2025-12-12T20:44:28.678890498Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
	Dec 12 20:44:28 kubernetes-upgrade-016181 containerd[556]: time="2025-12-12T20:44:28.680020071Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
	Dec 12 20:44:30 kubernetes-upgrade-016181 containerd[556]: time="2025-12-12T20:44:30.327811427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 20:44:30 kubernetes-upgrade-016181 containerd[556]: time="2025-12-12T20:44:30.329561091Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21140371"
	Dec 12 20:44:30 kubernetes-upgrade-016181 containerd[556]: time="2025-12-12T20:44:30.332537663Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 20:44:30 kubernetes-upgrade-016181 containerd[556]: time="2025-12-12T20:44:30.336786154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 20:44:30 kubernetes-upgrade-016181 containerd[556]: time="2025-12-12T20:44:30.337883801Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 1.657707403s"
	Dec 12 20:44:30 kubernetes-upgrade-016181 containerd[556]: time="2025-12-12T20:44:30.337931135Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\""
	Dec 12 20:49:20 kubernetes-upgrade-016181 containerd[556]: time="2025-12-12T20:49:20.403883828Z" level=info msg="container event discarded" container=564c54821d75e04ddf2d5a9a6f0838a79702f912874aca0a4e42cdf49799e5b7 type=CONTAINER_DELETED_EVENT
	Dec 12 20:49:20 kubernetes-upgrade-016181 containerd[556]: time="2025-12-12T20:49:20.419224075Z" level=info msg="container event discarded" container=edbb6777c9149a6df9191f4764263faf8efa5eba2e1026ebd3433c19fd587f65 type=CONTAINER_DELETED_EVENT
	Dec 12 20:49:20 kubernetes-upgrade-016181 containerd[556]: time="2025-12-12T20:49:20.430506525Z" level=info msg="container event discarded" container=78a134497533fbe234145f371dbf0500d4844f75f8d3801c759d54377800eb5d type=CONTAINER_DELETED_EVENT
	Dec 12 20:49:20 kubernetes-upgrade-016181 containerd[556]: time="2025-12-12T20:49:20.430560915Z" level=info msg="container event discarded" container=0cbd5582bb64ba80018e84e16e40b09698e48625e1f297f333f1fdc93928ed34 type=CONTAINER_DELETED_EVENT
	Dec 12 20:49:20 kubernetes-upgrade-016181 containerd[556]: time="2025-12-12T20:49:20.449002902Z" level=info msg="container event discarded" container=6d78433a2941681a1a3447b5028f0187f2857bfa6ed6ca6da7bf6b2bcfe65748 type=CONTAINER_DELETED_EVENT
	Dec 12 20:49:20 kubernetes-upgrade-016181 containerd[556]: time="2025-12-12T20:49:20.449072709Z" level=info msg="container event discarded" container=30af23be354344abed5b0a0afa5f58a2fa2fb87c3b4c5fd9008c4a57a50257df type=CONTAINER_DELETED_EVENT
	Dec 12 20:49:20 kubernetes-upgrade-016181 containerd[556]: time="2025-12-12T20:49:20.465398215Z" level=info msg="container event discarded" container=a029c87f8febba31d652858e1441658b5720d86df4cf187016df3a48be14678f type=CONTAINER_DELETED_EVENT
	Dec 12 20:49:20 kubernetes-upgrade-016181 containerd[556]: time="2025-12-12T20:49:20.465467226Z" level=info msg="container event discarded" container=3b4cf82e3f0c3bb0ac6c22fac71fae569eca6316891ab6fd9d3afb8bab6792f8 type=CONTAINER_DELETED_EVENT
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 19:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014827] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497798] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.037128] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.743560] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.524348] kauditd_printk_skb: 36 callbacks suppressed
	[Dec12 20:22] hrtimer: interrupt took 15023466 ns
	
	
	==> kernel <==
	 20:52:38 up  1:35,  0 user,  load average: 3.44, 2.16, 1.93
	Linux kubernetes-upgrade-016181 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:52:34 kubernetes-upgrade-016181 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:52:34 kubernetes-upgrade-016181 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 12 20:52:34 kubernetes-upgrade-016181 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:52:34 kubernetes-upgrade-016181 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:52:35 kubernetes-upgrade-016181 kubelet[14355]: E1212 20:52:35.042245   14355 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:52:35 kubernetes-upgrade-016181 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:52:35 kubernetes-upgrade-016181 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:52:35 kubernetes-upgrade-016181 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 12 20:52:35 kubernetes-upgrade-016181 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:52:35 kubernetes-upgrade-016181 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:52:35 kubernetes-upgrade-016181 kubelet[14383]: E1212 20:52:35.847582   14383 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:52:35 kubernetes-upgrade-016181 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:52:35 kubernetes-upgrade-016181 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:52:36 kubernetes-upgrade-016181 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 12 20:52:36 kubernetes-upgrade-016181 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:52:36 kubernetes-upgrade-016181 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:52:36 kubernetes-upgrade-016181 kubelet[14389]: E1212 20:52:36.812961   14389 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:52:36 kubernetes-upgrade-016181 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:52:36 kubernetes-upgrade-016181 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:52:37 kubernetes-upgrade-016181 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 12 20:52:37 kubernetes-upgrade-016181 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:52:37 kubernetes-upgrade-016181 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:52:37 kubernetes-upgrade-016181 kubelet[14409]: E1212 20:52:37.586884   14409 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:52:37 kubernetes-upgrade-016181 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:52:37 kubernetes-upgrade-016181 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
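The kubelet journal above pins the root cause: this kubelet build refuses to start on a cgroup v1 host unless the configuration opts out, exactly as the earlier SystemVerification warning says. A minimal by-hand opt-out on the node, assuming the config path kubeadm writes in this run (sketch only; failCgroupV1 is the lowerCamel YAML form of the 'FailCgroupV1' option quoted in the warning):

    # Append the opt-out the validation error asks for, then restart kubelet.
    sudo tee -a /var/lib/kubelet/config.yaml <<'EOF'
    failCgroupV1: false
    EOF
    sudo systemctl restart kubelet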
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-016181 -n kubernetes-upgrade-016181
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-016181 -n kubernetes-upgrade-016181: exit status 2 (457.356488ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-016181" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-016181" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-016181
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-016181: (2.718733831s)
--- FAIL: TestKubernetesUpgrade (804.05s)
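For reference, the wait-control-plane phase that times out throughout this failure is just polling the kubelet health endpoint quoted in the error text. The same check can be reproduced by hand on the node (sketch; the 240 attempts mirror the 4m0s budget in the log):

    # Poll the kubelet healthz endpoint kubeadm waits on; break once it answers.
    for i in $(seq 1 240); do
      curl -sf http://127.0.0.1:10248/healthz && break
      sleep 1
    done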

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (7200.077s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1212 21:24:44.018587    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/kindnet-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1212 21:24:51.907066    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1212 21:24:57.796793    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/auto-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1212 21:25:24.980434    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/kindnet-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
I1212 21:25:37.036397    4120 config.go:182] Loaded profile config "flannel-455251": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1212 21:25:41.039923    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/calico-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:25:41.361525    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/calico-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1212 21:25:42.003716    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/calico-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1212 21:25:43.291988    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/calico-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1212 21:25:45.854365    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/calico-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1212 21:25:50.976331    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/calico-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1212 21:26:01.217867    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/calico-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
(previous warning repeated 19 more times)
E1212 21:26:21.700170    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/calico-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
(previous warning repeated 24 more times)
E1212 21:26:46.902111    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/kindnet-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
(previous warning repeated 15 more times)
E1212 21:27:02.661980    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/calico-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
(previous warning repeated 11 more times)
E1212 21:27:13.933950    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/auto-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1212 21:27:16.756172    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/custom-flannel-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:27:16.762641    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/custom-flannel-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:27:16.774181    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/custom-flannel-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:27:16.795687    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/custom-flannel-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:27:16.837037    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/custom-flannel-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:27:16.918513    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/custom-flannel-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1212 21:27:17.080731    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/custom-flannel-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:27:17.402279    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/custom-flannel-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1212 21:27:18.044591    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/custom-flannel-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1212 21:27:19.326954    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/custom-flannel-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1212 21:27:21.888550    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/custom-flannel-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1212 21:27:22.897028    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
(previous warning repeated 4 more times)
E1212 21:27:27.009903    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/custom-flannel-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:27:27.186661    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/default-k8s-diff-port-983514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
(previous warning repeated 9 more times)
E1212 21:27:37.251625    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/custom-flannel-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
(previous warning repeated 3 more times)
E1212 21:27:41.638806    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/auto-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
(previous warning repeated 6 more times)
E1212 21:27:48.857034    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
(previous warning repeated 8 more times)
E1212 21:27:57.732873    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/custom-flannel-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
(previous warning repeated 26 more times)
E1212 21:28:24.583671    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/calico-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
	(last message repeated 13 more times)
E1212 21:28:38.694861    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/custom-flannel-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
	(last message repeated 14 more times)
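The warning above comes from a polling helper that repeatedly lists the dashboard pods and logs every failed attempt while the apiserver is unreachable. A minimal sketch of that polling shape, assuming client-go v0.33 and an illustrative kubeconfig-based clientset (not minikube's actual helper code):

    // podpoll.go - hedged sketch of the pod-list poll behind the warning above.
    // Assumes client-go v0.33; clientset construction is illustrative.
    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for i := 0; i < 5; i++ {
            pods, err := clientset.CoreV1().Pods("kubernetes-dashboard").List(
                context.TODO(),
                metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
            if err != nil {
                // With the apiserver down, err carries the same
                // "connect: connection refused" text seen in the log.
                fmt.Println("WARNING: pod list returned:", err)
                time.Sleep(2 * time.Second)
                continue
            }
            fmt.Println("found", len(pods.Items), "pods")
            return
        }
    }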
panic: test timed out after 2h0m0s
	running tests:
		TestStartStop (38m40s)
		TestStartStop/group/no-preload (29m54s)
		TestStartStop/group/no-preload/serial (29m54s)
		TestStartStop/group/no-preload/serial/AddonExistsAfterStop (4m15s)
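For context, this panic is Go's standard test-binary watchdog: testing.(*M).startAlarm fires once the -timeout duration (2h in this run) elapses and panics with the set of still-running tests. A tiny reproduction of the same mechanism:

    // timeout_demo_test.go - minimal reproduction of the alarm panic above.
    // Run with: go test -timeout=1s
    package demo

    import (
        "testing"
        "time"
    )

    func TestHangs(t *testing.T) {
        time.Sleep(2 * time.Second) // outlives -timeout, so startAlarm panics
    }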

goroutine 6748 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2682 +0x2b0
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x38

goroutine 1 [chan receive, 2 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4000614000, 0x400094dbb8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
testing.runTests(0x40007b4090, {0x534c680, 0x2c, 0x2c}, {0x400094dd08?, 0x125774?, 0x5375080?})
	/usr/local/go/src/testing/testing.go:2475 +0x3b8
testing.(*M).Run(0x4000346820)
	/usr/local/go/src/testing/testing.go:2337 +0x530
k8s.io/minikube/test/integration.TestMain(0x4000346820)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xf0
main.main()
	_testmain.go:133 +0x88

goroutine 4568 [chan receive, 13 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001752120, 0x4000106230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4563
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4900 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4899
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8
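Many of the blocked goroutines in this dump share the shape above: a cert-rotation poller parked in wait.PollImmediateUntilWithContext. A self-contained sketch of that polling primitive, assuming apimachinery v0.33 (the condition function here is illustrative):

    // pollsketch.go - sketch of the poller these goroutines run.
    package main

    import (
        "context"
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
        defer cancel()

        attempts := 0
        err := wait.PollImmediateUntilWithContext(ctx, 500*time.Millisecond,
            func(ctx context.Context) (bool, error) {
                attempts++
                return attempts >= 3, nil // pretend the condition becomes true
            })
        // A goroutine stuck here forever is what shows up as "[select]" above.
        fmt.Println("attempts:", attempts, "err:", err)
    }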

goroutine 3972 [chan receive, 32 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4004ecf260, 0x4000106230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3919
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 1136 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x4001a0d380, 0x4001b1a230)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 850
	/usr/local/go/src/os/exec/exec.go:775 +0x678
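Goroutines like this one ("chan send, 109 minutes" inside os/exec.(*Cmd).watchCtx) are the context watchers that exec.CommandContext starts; each stays blocked on a channel send until the command's Wait runs. A hedged sketch of the pattern (not minikube's code; the "sleep" command is illustrative):

    // watchctx_demo.go - why watchCtx goroutines linger without Wait.
    package main

    import (
        "context"
        "os/exec"
    )

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        defer cancel()

        cmd := exec.CommandContext(ctx, "sleep", "1")
        if err := cmd.Start(); err != nil { // spawns the internal watchCtx goroutine
            panic(err)
        }
        // Returning here without cmd.Wait() would leave watchCtx blocked on its
        // channel send, exactly the "chan send" state in the dump above.
        _ = cmd.Wait() // draining via Wait lets the watcher exit
    }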

goroutine 2012 [chan send, 78 minutes]:
os/exec.(*Cmd).watchCtx(0x4000171680, 0x4001af9110)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 2011
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 3829 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0x40002bfcd0, 0x17)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40002bfcc0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4004f69740)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001cda770?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000106230?}, 0x4001ce26a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000106230}, 0x4001646f38, {0x369e520, 0x4001904120}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4001ce27a8?, {0x369e520?, 0x4001904120?}, 0x60?, 0x40002a0380?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001cc4b90, 0x3b9aca00, 0x0, 0x1, 0x4000106230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3842
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174
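The "sync.Cond.Wait" stacks like the one above are cert-rotation workers parked inside workqueue.(*Typed).Get, waiting for an item to arrive. A minimal sketch of that consumer loop, assuming client-go v0.33's generic workqueue (the item string is illustrative):

    // workqueue_demo.go - sketch of the Get/Done consumer loop seen above.
    package main

    import (
        "fmt"

        "k8s.io/client-go/util/workqueue"
    )

    func main() {
        q := workqueue.NewTyped[string]()
        go func() {
            q.Add("rotate-cert") // in client-go this would be a cert cache key
            q.ShutDown()
        }()
        for {
            item, shutdown := q.Get() // blocks in sync.Cond.Wait while empty
            if shutdown {
                return
            }
            fmt.Println("processing:", item)
            q.Done(item)
        }
    }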

goroutine 5524 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000106230}, 0x4001ce4f40, 0x4001ce4f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000106230}, 0x0?, 0x4001ce4f40, 0x4001ce4f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000106230?}, 0x36e6618?, 0x400151b7a0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x400151b6c0?, 0x0?, 0x40006994a0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5488
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5523 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x40018e90d0, 0xf)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40018e90c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001b9c300)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001a40620?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000106230?}, 0x4001ce3ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000106230}, 0x400010cf38, {0x369e520, 0x4000946360}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e520?, 0x4000946360?}, 0x1?, 0x36e6618?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400192f770, 0x3b9aca00, 0x0, 0x1, 0x4000106230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5488
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 6470 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4004f633e0, 0x4000106230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 6457
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3842 [chan receive, 36 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4004f69740, 0x4000106230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3821
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3513 [chan receive, 13 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4000584000, 0x339bd20)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3330
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 6107 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x40018e8c10, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40018e8c00)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40006af140)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001b1ad20?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000106230?}, 0x40000a3ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000106230}, 0x40013d2f38, {0x369e520, 0x4000705e60}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e520?, 0x4000705e60?}, 0x20?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001a356a0, 0x3b9aca00, 0x0, 0x1, 0x4000106230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 6127
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 5833 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0x4001cc6910, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001cc6900)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400025fe00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001b1bc70?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000106230?}, 0x4001624808?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000106230}, 0x40013d6f38, {0x369e520, 0x40019d0810}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4001ce77a8?, {0x369e520?, 0x40019d0810?}, 0x0?, 0x36e6618?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001a35020, 0x3b9aca00, 0x0, 0x1, 0x4000106230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5846
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 1177 [select, 109 minutes]:
net/http.(*persistConn).readLoop(0x4001b8a000)
	/usr/local/go/src/net/http/transport.go:2398 +0xa6c
created by net/http.(*Transport).dialConn in goroutine 1175
	/usr/local/go/src/net/http/transport.go:1947 +0x111c

goroutine 5846 [chan receive, 5 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400025fe00, 0x4000106230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5827
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3831 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3830
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 126 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40006af560, 0x4000106230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 159
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 161 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000106230}, 0x40013b7f40, 0x4001644f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000106230}, 0x84?, 0x40013b7f40, 0x40013b7f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000106230?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4004f1aa80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 126
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 162 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 161
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 125 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000224080?}, 0x4000170d80?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 159
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 160 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x400091f710, 0x2d)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x400091f700)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40006af560)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40002761c0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000106230?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000106230}, 0x4001647f38, {0x369e520, 0x4004eebc50}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x4004eebc50?}, 0xc0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4004f1d080, 0x3b9aca00, 0x0, 0x1, 0x4000106230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 126
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 727 [IO wait, 113 minutes]:
internal/poll.runtime_pollWait(0xffff71b59000, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x400092ba00?, 0xdbd0c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x400092ba00)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x400092ba00)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x40002be940)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x40002be940)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x4000153800, {0x36d4000, 0x40002be940})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x4000153800)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 725
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104
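This goroutine is the test's HTTP proxy listener; "IO wait, 113 minutes" means it has been parked in Accept the whole run, which is expected for a listener. Roughly the shape of startHTTPProxy, sketched with illustrative names and address:

    // proxy_demo.go - an http.Server accepting connections in a goroutine,
    // the pattern behind startHTTPProxy. Address is illustrative.
    package main

    import (
        "log"
        "net/http"
        "time"
    )

    func main() {
        srv := &http.Server{Addr: "127.0.0.1:8080"}
        go func() {
            // ListenAndServe blocks in Accept, the "[IO wait]" state above.
            if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
                log.Println("proxy exited:", err)
            }
        }()
        time.Sleep(100 * time.Millisecond) // give the listener time to start
        _ = srv.Close()
    }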

goroutine 4294 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x40002bf950, 0x2)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40002bf940)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400025f3e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001af9420?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000106230?}, 0x4001ce66a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000106230}, 0x400010df38, {0x369e520, 0x4001ad0300}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x4001ad0300?}, 0x0?, 0x4000796a80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400160c1a0, 0x3b9aca00, 0x0, 0x1, 0x4000106230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4272
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4272 [chan receive, 13 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400025f3e0, 0x4000106230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4270
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 6127 [chan receive, 4 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40006af140, 0x4000106230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 6125
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 5835 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5834
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4899 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000106230}, 0x4001ce5f40, 0x4001ce5f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000106230}, 0x0?, 0x4001ce5f40, 0x4001ce5f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000106230?}, 0x36e6618?, 0x40019342a0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4000170f00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4879
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5192 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000224080?}, 0x40006f4000?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5191
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5834 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000106230}, 0x40000a1740, 0x40000a1788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000106230}, 0x5a?, 0x40000a1740, 0x40000a1788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000106230?}, 0x0?, 0x40000a1750?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42d0?, 0x4000224080?, 0x40006b8000?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5846
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 6126 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000224080?}, 0x4000614380?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 6125
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4878 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000224080?}, 0x40006f4000?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4877
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 2111 [chan send, 78 minutes]:
os/exec.(*Cmd).watchCtx(0x4000612600, 0x4004f0bc00)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1524
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 1593 [chan receive, 81 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40002ae3c0, 0x4000106230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1591
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 1607 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4001b94bd0, 0x24)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001b94bc0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40002ae3c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40002f72d0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000106230?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000106230}, 0x40013d9f38, {0x369e520, 0x4001bb21e0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x4001bb21e0?}, 0xa0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001dc1860, 0x3b9aca00, 0x0, 0x1, 0x4000106230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1593
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4271 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000224080?}, 0x4000170780?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4270
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 1608 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000106230}, 0x40000a1740, 0x4001642f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000106230}, 0x58?, 0x40000a1740, 0x40000a1788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000106230?}, 0x4001a0cc00?, 0x4001a36140?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4000612c00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1593
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4543 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4004eb4190, 0x12)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4004eb4180)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001752120)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001934620?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000106230?}, 0x40013b4ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000106230}, 0x4001353f38, {0x369e520, 0x40015f7bc0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e520?, 0x40015f7bc0?}, 0x50?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40019f2bd0, 0x3b9aca00, 0x0, 0x1, 0x4000106230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4568
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4100 [chan receive, 4 minutes]:
testing.(*T).Run(0x40006b9c00, {0x2994231?, 0x40000006ee?}, 0x40002b6000)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0x40006b9c00)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x1b8
testing.tRunner(0x40006b9c00, 0x40014d7480)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3517
	/usr/local/go/src/testing/testing.go:1997 +0x364
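Goroutines 4100 and 3513 show the nested t.Run chain of TestStartStop: each parent blocks in a channel receive until its subtest goroutine finishes. The same shape in miniature:

    // subtest_demo_test.go - sketch of nested subtests; parents block in
    // testing.(*T).Run (chan receive) until the leaf returns.
    package demo

    import "testing"

    func TestNested(t *testing.T) {
        t.Run("group", func(t *testing.T) {
            t.Run("serial", func(t *testing.T) {
                // While this leaf runs, both parents sit in chan receive,
                // matching the shape of goroutines 4100 and 3513 above.
            })
        })
    }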

goroutine 5845 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000224080?}, 0x40006b8000?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5827
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 3830 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000106230}, 0x4001ce5740, 0x40013d5f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000106230}, 0x84?, 0x4001ce5740, 0x4001ce5788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000106230?}, 0x36e6618?, 0x400153f7a0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400228ac00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3842
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 916 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 915
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 5196 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x40016e8110, 0x10)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40016e8100)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001b9d320)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40002a6a80?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000106230?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000106230}, 0x4001828f38, {0x369e520, 0x40019abec0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x40019abec0?}, 0x50?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40006a2b70, 0x3b9aca00, 0x0, 0x1, 0x4000106230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5193
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 6469 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000224080?}, 0x4001998780?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 6457
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5197 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000106230}, 0x4001ce6f40, 0x4001ce6f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000106230}, 0x0?, 0x4001ce6f40, 0x4001ce6f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000106230?}, 0x36e6618?, 0x40018587e0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x4001858700?, 0x0?, 0x4001510c00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5193
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4567 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000224080?}, 0x40006b9880?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4563
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 1149 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x4001b6a300, 0x4001b1a930)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1148
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 914 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x4001cc79d0, 0x2b)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001cc79c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001b9d980)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001bd5b90?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000106230?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000106230}, 0x400134df38, {0x369e520, 0x40019ef560}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x40019ef560?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40014d12b0, 0x3b9aca00, 0x0, 0x1, 0x4000106230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 934
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 915 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000106230}, 0x400134ef40, 0x400134ef88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000106230}, 0x31?, 0x400134ef40, 0x400134ef88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000106230?}, 0x161f90?, 0x4000398700?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4000613080?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 934
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 933 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000224080?}, 0x40006b9c00?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 932
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 934 [chan receive, 109 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001b9d980, 0x4000106230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 932
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4295 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000106230}, 0x40000a7f40, 0x40000a7f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000106230}, 0x0?, 0x40000a7f40, 0x40000a7f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000106230?}, 0x36e6618?, 0x4004f0a850?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x4004f0a770?, 0x0?, 0x40013eba40?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4272
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5198 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5197
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 5487 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000224080?}, 0x400161b180?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5512
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 3841 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000224080?}, 0x40013cec40?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3821
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5525 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5524
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4544 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000106230}, 0x4001ce9f40, 0x4001ce9f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000106230}, 0x30?, 0x4001ce9f40, 0x4001ce9f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000106230?}, 0x36e6618?, 0x4004f0a5b0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4001505080?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4568
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 1592 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000224080?}, 0x4000170d80?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 1591
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5488 [chan receive, 7 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001b9c300, 0x4000106230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5512
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 5193 [chan receive, 9 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001b9d320, 0x4000106230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5191
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 1609 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1608
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 1362 [IO wait, 109 minutes]:
internal/poll.runtime_pollWait(0xffff71b58800, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x40014d7680?, 0xdbd0c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x40014d7680)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x40014d7680)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x4001b95400)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x4001b95400)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x4001ab0400, {0x36d4000, 0x4001b95400})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x4001ab0400)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 1360
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104
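
The startHTTPProxy frames above show the harness serving the proxy for StartWithProxy from its own goroutine via net/http, which is why goroutine 1362 has sat in Accept ("IO wait") for the full 109 minutes of the run. A minimal sketch of that pattern, assuming a reverse-proxy handler (the helper below is illustrative, not minikube's actual code):

package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// startHTTPProxy is a hypothetical stand-in for the helper in the trace:
// it serves on its own goroutine so the test body keeps running, matching
// the net/http.(*Server).ListenAndServe frame above.
func startHTTPProxy(addr string, target *url.URL) *http.Server {
	srv := &http.Server{
		Addr:    addr,
		Handler: httputil.NewSingleHostReverseProxy(target),
	}
	go func() {
		// ListenAndServe blocks in Accept until shutdown, so this goroutine
		// stays alive (parked in "IO wait") for the proxy's lifetime.
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Printf("proxy exited: %v", err)
		}
	}()
	return srv
}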

goroutine 2017 [chan send, 78 minutes]:
os/exec.(*Cmd).watchCtx(0x4000612780, 0x4001cda8c0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 2000
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 3971 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000224080?}, 0x40006b8000?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3919
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 3330 [chan receive, 40 minutes]:
testing.(*T).Run(0x4000584e00, {0x296d71f?, 0x400134ff58?}, 0x339bd20)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop(0x4000584e00)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x3c
testing.tRunner(0x4000584e00, 0x339bb38)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364
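
Goroutine 3330 is the standard nested-subtest shape: the parent test blocks in testing.(*T).Run ("chan receive") while each child runs under testing.tRunner in its own goroutine, which is why a long-running group such as TestStartStop appears here for 40 minutes. A generic sketch of that structure (test and group names hypothetical):

package sketch

import "testing"

// TestStartStopShape mirrors the call shape in the trace: the parent
// blocks in t.Run while each subtest executes in its own goroutine.
func TestStartStopShape(t *testing.T) {
	for _, group := range []string{"no-preload", "embed-certs"} {
		t.Run(group, func(t *testing.T) {
			t.Run("serial", func(t *testing.T) {
				// cluster lifecycle steps would run here
			})
		})
	}
}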

goroutine 4898 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x40016e9a10, 0x10)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40016e9a00)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4004f63380)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001d1a688?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000106230?}, 0x4001d1a6a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000106230}, 0x400010af38, {0x369e520, 0x400060a9c0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e520?, 0x400060a9c0?}, 0x1?, 0x36e6618?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400070fa70, 0x3b9aca00, 0x0, 0x1, 0x4000106230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4879
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174
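
The many dynamicClientCert goroutines in this dump are client-go's cert-rotation controller: run blocks on a channel while a worker, driven by wait.Until/BackoffUntil, drains a workqueue and parks in sync.Cond.Wait whenever the queue is empty. A sketch of that generic worker pattern using client-go's typed workqueue (the handler is a placeholder, not the real rotation logic):

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/util/workqueue"
)

// processNextWorkItem blocks on the queue's condition variable when the
// queue is empty -- the sync.Cond.Wait frame seen in the worker goroutines.
func processNextWorkItem(q workqueue.TypedInterface[string]) bool {
	key, quit := q.Get()
	if quit {
		return false
	}
	defer q.Done(key)
	fmt.Println("processing", key) // a real controller would rotate the cert here
	return true
}

func main() {
	q := workqueue.NewTyped[string]()
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	// wait.Until re-invokes the worker every second until the channel closes,
	// matching the JitterUntil/BackoffUntil frames in the dump.
	go wait.Until(func() {
		for processNextWorkItem(q) {
		}
	}, time.Second, ctx.Done())
	q.Add("example-key")
	<-ctx.Done()
}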

goroutine 1178 [select, 109 minutes]:
net/http.(*persistConn).writeLoop(0x4001b8a000)
	/usr/local/go/src/net/http/transport.go:2600 +0x94
created by net/http.(*Transport).dialConn in goroutine 1175
	/usr/local/go/src/net/http/transport.go:1948 +0x1164

goroutine 3959 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3958
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4577 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4544
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 3517 [chan receive, 30 minutes]:
testing.(*T).Run(0x4000585c00, {0x296eb91?, 0x0?}, 0x40014d7480)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1(0x4000585c00)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0x7e4
testing.tRunner(0x4000585c00, 0x4001b94100)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3513
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 1063 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x400192bb00, 0x4001934cb0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1086
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 6109 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 6108
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 3958 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000106230}, 0x4001414f40, 0x4001414f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000106230}, 0x50?, 0x4001414f40, 0x4001414f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000106230?}, 0x0?, 0x95c64?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4001524c00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3972
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 6108 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000106230}, 0x40013b8740, 0x40013b8788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000106230}, 0x64?, 0x40013b8740, 0x40013b8788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000106230?}, 0x0?, 0x40013b8750?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42d0?, 0x4000224080?, 0x4000614380?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 6127
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 6022 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36e65a8, 0x40019639f0}, {0x36d4660, 0x4001bd3960}, 0x1, 0x0, 0x4001437b00)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/loop.go:66 +0x158
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36e6618?, 0x40002a02a0?}, 0x3b9aca00, 0x4001437d28?, 0x1, 0x4001437b00)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:48 +0x8c
k8s.io/minikube/test/integration.PodWait({0x36e6618, 0x40002a02a0}, 0x40006f4000, {0x4001a9bc68, 0x11}, {0x29941e1, 0x14}, {0x29ac150, 0x1c}, 0x7dba821800)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:380 +0x22c
k8s.io/minikube/test/integration.validateAddonAfterStop({0x36e6618, 0x40002a02a0}, 0x40006f4000, {0x4001a9bc68, 0x11}, {0x29786f9?, 0x3720f83800161e84?}, {0x693c8815?, 0x4001829f58?}, {0x161f08?, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:285 +0xd4
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x40006f4000?)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x44
testing.tRunner(0x40006f4000, 0x40002b6000)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4100
	/usr/local/go/src/testing/testing.go:1997 +0x364
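
Goroutine 6022 shows where the stuck AddonExistsAfterStop test spends its time: PodWait polls through wait.PollUntilContextTimeout until the pod condition holds or the deadline passes. A minimal sketch of that call shape (the interval, timeout, and readiness check below are assumptions):

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// podIsRunning is a hypothetical stand-in for the real readiness check,
// which would list pods and inspect their phase.
func podIsRunning() bool { return true }

func main() {
	// Poll once per second, starting immediately, until the condition is
	// true or the two-minute timeout expires.
	err := wait.PollUntilContextTimeout(context.Background(), time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			return podIsRunning(), nil
		})
	if err != nil {
		fmt.Println("pod never became ready:", err)
	}
}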

goroutine 3957 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4001b94750, 0x17)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001b94740)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4004ecf260)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4000085ce0?, 0x90a7b203a226769?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000106230?}, 0x2020202020202020?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000106230}, 0x400157bf38, {0x369e520, 0x40006f6030}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x20090a2c7d202020?, {0x369e520?, 0x40006f6030?}, 0x6f?, 0x74636e756622203a?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001bd0020, 0x3b9aca00, 0x0, 0x1, 0x4000106230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3972
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4296 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4295
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4879 [chan receive, 11 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4004f63380, 0x4000106230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4877
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 6473 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4001b95b10, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001b95b00)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4004f633e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40019e0230?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000106230?}, 0x4001ce3ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000106230}, 0x4001411f38, {0x369e520, 0x4000946390}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4001ce3fa8?, {0x369e520?, 0x4000946390?}, 0xc0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40006bc960, 0x3b9aca00, 0x0, 0x1, 0x4000106230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 6470
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 6474 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000106230}, 0x4001d18740, 0x4001d18788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000106230}, 0x0?, 0x4001d18740, 0x4001d18788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000106230?}, 0x36e6618?, 0x4001adfce0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x4001adfc00?, 0x0?, 0x400161b180?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 6470
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 6475 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 6474
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8


Test pass (307/369)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 14.6
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.11
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.18
12 TestDownloadOnly/v1.34.2/json-events 12.21
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.09
18 TestDownloadOnly/v1.34.2/DeleteAll 0.21
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 12.97
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.21
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.62
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.12
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
36 TestAddons/Setup 167.1
38 TestAddons/serial/Volcano 40.79
40 TestAddons/serial/GCPAuth/Namespaces 0.18
41 TestAddons/serial/GCPAuth/FakeCredentials 8.83
44 TestAddons/parallel/Registry 16.3
45 TestAddons/parallel/RegistryCreds 0.71
46 TestAddons/parallel/Ingress 19.37
47 TestAddons/parallel/InspektorGadget 11.94
48 TestAddons/parallel/MetricsServer 6.87
50 TestAddons/parallel/CSI 51.76
51 TestAddons/parallel/Headlamp 16.81
52 TestAddons/parallel/CloudSpanner 6.99
53 TestAddons/parallel/LocalPath 51.41
54 TestAddons/parallel/NvidiaDevicePlugin 6.64
55 TestAddons/parallel/Yakd 11.97
57 TestAddons/StoppedEnableDisable 12.34
58 TestCertOptions 40.5
59 TestCertExpiration 232
61 TestForceSystemdFlag 33.79
62 TestForceSystemdEnv 39.1
63 TestDockerEnvContainerd 46.56
67 TestErrorSpam/setup 31.37
68 TestErrorSpam/start 0.85
69 TestErrorSpam/status 1.19
70 TestErrorSpam/pause 1.75
71 TestErrorSpam/unpause 1.72
72 TestErrorSpam/stop 2.25
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 80.96
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 7.19
79 TestFunctional/serial/KubeContext 0.06
80 TestFunctional/serial/KubectlGetPods 0.11
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.57
84 TestFunctional/serial/CacheCmd/cache/add_local 1.33
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.87
89 TestFunctional/serial/CacheCmd/cache/delete 0.11
90 TestFunctional/serial/MinikubeKubectlCmd 0.15
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
92 TestFunctional/serial/ExtraConfig 69.35
93 TestFunctional/serial/ComponentHealth 0.09
94 TestFunctional/serial/LogsCmd 1.59
95 TestFunctional/serial/LogsFileCmd 1.45
96 TestFunctional/serial/InvalidService 5.27
98 TestFunctional/parallel/ConfigCmd 0.45
99 TestFunctional/parallel/DashboardCmd 8.81
100 TestFunctional/parallel/DryRun 0.47
101 TestFunctional/parallel/InternationalLanguage 0.2
102 TestFunctional/parallel/StatusCmd 1.1
106 TestFunctional/parallel/ServiceCmdConnect 8.63
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 23.24
110 TestFunctional/parallel/SSHCmd 0.72
111 TestFunctional/parallel/CpCmd 2.14
113 TestFunctional/parallel/FileSync 0.38
114 TestFunctional/parallel/CertSync 2.17
118 TestFunctional/parallel/NodeLabels 0.14
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.63
122 TestFunctional/parallel/License 0.33
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.72
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.43
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.15
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
134 TestFunctional/parallel/ServiceCmd/DeployApp 7.22
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
136 TestFunctional/parallel/ProfileCmd/profile_list 0.44
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
138 TestFunctional/parallel/ServiceCmd/List 0.71
139 TestFunctional/parallel/MountCmd/any-port 8.54
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
142 TestFunctional/parallel/ServiceCmd/Format 0.51
143 TestFunctional/parallel/ServiceCmd/URL 0.46
144 TestFunctional/parallel/MountCmd/specific-port 2.49
145 TestFunctional/parallel/MountCmd/VerifyCleanup 2.56
146 TestFunctional/parallel/Version/short 0.07
147 TestFunctional/parallel/Version/components 1.31
148 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
149 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
150 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
151 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
152 TestFunctional/parallel/ImageCommands/ImageBuild 3.8
153 TestFunctional/parallel/ImageCommands/Setup 0.73
154 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
155 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.24
156 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
157 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.4
158 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.3
159 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.39
160 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.38
161 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
162 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.64
163 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.4
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.07
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.31
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.83
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.11
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.07
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 0.98
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.44
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.42
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.23
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.13
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.71
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.18
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.29
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.71
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.57
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.3
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.1
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.42
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.41
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.38
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.9
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 2.15
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.05
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.52
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.23
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.24
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.23
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.26
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.45
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.29
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.13
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 1.11
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.36
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.33
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.47
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.68
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.37
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.15
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.15
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.14
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 179.47
265 TestMultiControlPlane/serial/DeployApp 7.26
266 TestMultiControlPlane/serial/PingHostFromPods 2.2
267 TestMultiControlPlane/serial/AddWorkerNode 60.05
268 TestMultiControlPlane/serial/NodeLabels 0.1
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.1
270 TestMultiControlPlane/serial/CopyFile 20.01
271 TestMultiControlPlane/serial/StopSecondaryNode 12.95
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.84
273 TestMultiControlPlane/serial/RestartSecondaryNode 14.14
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.41
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 99.72
276 TestMultiControlPlane/serial/DeleteSecondaryNode 11.98
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
278 TestMultiControlPlane/serial/StopCluster 36.31
279 TestMultiControlPlane/serial/RestartCluster 60.96
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.82
281 TestMultiControlPlane/serial/AddSecondaryNode 51.46
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.09
287 TestJSONOutput/start/Command 78.6
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
293 TestJSONOutput/pause/Command 0.69
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
299 TestJSONOutput/unpause/Command 0.69
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 1.5
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.25
312 TestKicCustomNetwork/create_custom_network 41.38
313 TestKicCustomNetwork/use_default_bridge_network 36.88
314 TestKicExistingNetwork 32.73
315 TestKicCustomSubnet 38.23
316 TestKicStaticIP 32.11
317 TestMainNoArgs 0.05
318 TestMinikubeProfile 73.72
321 TestMountStart/serial/StartWithMountFirst 8.38
322 TestMountStart/serial/VerifyMountFirst 0.28
323 TestMountStart/serial/StartWithMountSecond 8.36
324 TestMountStart/serial/VerifyMountSecond 0.28
325 TestMountStart/serial/DeleteFirst 1.71
326 TestMountStart/serial/VerifyMountPostDelete 0.28
327 TestMountStart/serial/Stop 1.29
328 TestMountStart/serial/RestartStopped 8.01
329 TestMountStart/serial/VerifyMountPostStop 0.27
332 TestMultiNode/serial/FreshStart2Nodes 106.7
333 TestMultiNode/serial/DeployApp2Nodes 5.02
334 TestMultiNode/serial/PingHostFrom2Pods 1.03
335 TestMultiNode/serial/AddNode 27.35
336 TestMultiNode/serial/MultiNodeLabels 0.09
337 TestMultiNode/serial/ProfileList 0.73
338 TestMultiNode/serial/CopyFile 11.19
339 TestMultiNode/serial/StopNode 2.42
340 TestMultiNode/serial/StartAfterStop 7.92
341 TestMultiNode/serial/RestartKeepsNodes 78.96
342 TestMultiNode/serial/DeleteNode 5.74
343 TestMultiNode/serial/StopMultiNode 24.06
344 TestMultiNode/serial/RestartMultiNode 54.24
345 TestMultiNode/serial/ValidateNameConflict 35.16
350 TestPreload 120.99
352 TestScheduledStopUnix 105.72
355 TestInsufficientStorage 12.54
356 TestRunningBinaryUpgrade 315.58
359 TestMissingContainerUpgrade 123.17
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
362 TestNoKubernetes/serial/StartWithK8s 49.39
363 TestNoKubernetes/serial/StartWithStopK8s 25.3
364 TestNoKubernetes/serial/Start 7.67
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.35
367 TestNoKubernetes/serial/ProfileList 0.88
368 TestNoKubernetes/serial/Stop 2.69
369 TestNoKubernetes/serial/StartNoArgs 7.38
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.36
371 TestStoppedBinaryUpgrade/Setup 2.05
372 TestStoppedBinaryUpgrade/Upgrade 306.79
373 TestStoppedBinaryUpgrade/MinikubeLogs 2.12
382 TestPause/serial/Start 59.96
383 TestPause/serial/SecondStartNoReconfiguration 6.27
384 TestPause/serial/Pause 0.75
385 TestPause/serial/VerifyStatus 0.33
386 TestPause/serial/Unpause 0.64
387 TestPause/serial/PauseAgain 0.89
388 TestPause/serial/DeletePaused 2.77
389 TestPause/serial/VerifyDeletedResources 0.37
397 TestNetworkPlugins/group/false 3.59
453 TestNetworkPlugins/group/auto/Start 79.42
454 TestNetworkPlugins/group/auto/KubeletFlags 0.31
455 TestNetworkPlugins/group/auto/NetCatPod 9.28
456 TestNetworkPlugins/group/auto/DNS 0.18
457 TestNetworkPlugins/group/auto/Localhost 0.15
458 TestNetworkPlugins/group/auto/HairPin 0.15
459 TestNetworkPlugins/group/kindnet/Start 79.22
460 TestNetworkPlugins/group/kindnet/ControllerPod 6
461 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
462 TestNetworkPlugins/group/kindnet/NetCatPod 9.29
463 TestNetworkPlugins/group/kindnet/DNS 0.17
464 TestNetworkPlugins/group/kindnet/Localhost 0.15
465 TestNetworkPlugins/group/kindnet/HairPin 0.15
466 TestNetworkPlugins/group/calico/Start 61.03
467 TestNetworkPlugins/group/calico/ControllerPod 6.01
468 TestNetworkPlugins/group/calico/KubeletFlags 0.31
469 TestNetworkPlugins/group/calico/NetCatPod 8.27
470 TestNetworkPlugins/group/calico/DNS 0.18
471 TestNetworkPlugins/group/calico/Localhost 0.15
472 TestNetworkPlugins/group/calico/HairPin 0.15
473 TestNetworkPlugins/group/custom-flannel/Start 58.33
474 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
475 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.28
476 TestNetworkPlugins/group/custom-flannel/DNS 0.16
477 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
478 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
479 TestNetworkPlugins/group/enable-default-cni/Start 74.33
480 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
481 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.32
482 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
483 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
484 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
485 TestNetworkPlugins/group/flannel/Start 59.22
487 TestNetworkPlugins/group/flannel/ControllerPod 6.01
488 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
489 TestNetworkPlugins/group/flannel/NetCatPod 9.25
490 TestNetworkPlugins/group/flannel/DNS 0.16
491 TestNetworkPlugins/group/flannel/Localhost 0.17
492 TestNetworkPlugins/group/flannel/HairPin 0.14
493 TestNetworkPlugins/group/bridge/Start 71.19
494 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
495 TestNetworkPlugins/group/bridge/NetCatPod 8.28
496 TestNetworkPlugins/group/bridge/DNS 0.16
497 TestNetworkPlugins/group/bridge/Localhost 0.16
498 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.28.0/json-events (14.6s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-559491 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-559491 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (14.604462205s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (14.60s)
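
With -o=json, minikube writes one JSON event per stdout line, which is what the json-events subtest consumes. A sketch of reading such a run from Go, decoding each event generically since the exact event schema is not reproduced in this report (profile name and flag list below are illustrative and shortened):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Same shape of command as the test, with the flag list shortened.
	cmd := exec.Command("out/minikube-linux-arm64",
		"start", "-o=json", "--download-only", "-p", "demo-profile")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// Each stdout line is one JSON event; decode into a generic map.
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var ev map[string]any
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // skip any non-JSON lines
		}
		fmt.Println("event:", ev["type"])
	}
	_ = cmd.Wait()
}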

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1212 19:29:07.558614    4120 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1212 19:29:07.558687    4120 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
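
The preload-exists check amounts to confirming the cached tarball is on disk. A minimal sketch using the path logged above:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Path copied from the preload.go log line above; adjust per environment.
	p := "/home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4"
	if _, err := os.Stat(p); err != nil {
		fmt.Println("preload missing:", err)
		return
	}
	fmt.Println("preload exists")
}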

TestDownloadOnly/v1.28.0/LogsDuration (0.11s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-559491
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-559491: exit status 85 (106.507295ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-559491 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-559491 │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:28:52
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:28:52.997433    4125 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:28:52.997612    4125 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:28:52.997643    4125 out.go:374] Setting ErrFile to fd 2...
	I1212 19:28:52.997661    4125 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:28:52.997912    4125 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	W1212 19:28:52.998069    4125 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22112-2315/.minikube/config/config.json: open /home/jenkins/minikube-integration/22112-2315/.minikube/config/config.json: no such file or directory
	I1212 19:28:52.998535    4125 out.go:368] Setting JSON to true
	I1212 19:28:52.999295    4125 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":682,"bootTime":1765567051,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 19:28:52.999383    4125 start.go:143] virtualization:  
	I1212 19:28:53.005622    4125 out.go:99] [download-only-559491] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1212 19:28:53.005940    4125 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball: no such file or directory
	I1212 19:28:53.006086    4125 notify.go:221] Checking for updates...
	I1212 19:28:53.010656    4125 out.go:171] MINIKUBE_LOCATION=22112
	I1212 19:28:53.014048    4125 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:28:53.017359    4125 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:28:53.020448    4125 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	I1212 19:28:53.023555    4125 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1212 19:28:53.029480    4125 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 19:28:53.029737    4125 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:28:53.054615    4125 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 19:28:53.054728    4125 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:28:53.463484    4125 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-12 19:28:53.453428809 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 19:28:53.463584    4125 docker.go:319] overlay module found
	I1212 19:28:53.466713    4125 out.go:99] Using the docker driver based on user configuration
	I1212 19:28:53.466753    4125 start.go:309] selected driver: docker
	I1212 19:28:53.466761    4125 start.go:927] validating driver "docker" against <nil>
	I1212 19:28:53.466855    4125 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:28:53.527529    4125 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-12 19:28:53.518534185 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 19:28:53.527682    4125 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 19:28:53.528008    4125 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1212 19:28:53.528195    4125 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 19:28:53.531420    4125 out.go:171] Using Docker driver with root privileges
	I1212 19:28:53.534305    4125 cni.go:84] Creating CNI manager for ""
	I1212 19:28:53.534367    4125 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 19:28:53.534379    4125 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 19:28:53.534457    4125 start.go:353] cluster config:
	{Name:download-only-559491 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-559491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:28:53.537479    4125 out.go:99] Starting "download-only-559491" primary control-plane node in "download-only-559491" cluster
	I1212 19:28:53.537501    4125 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 19:28:53.540289    4125 out.go:99] Pulling base image v0.0.48-1765505794-22112 ...
	I1212 19:28:53.540339    4125 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1212 19:28:53.540447    4125 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 19:28:53.555977    4125 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 to local cache
	I1212 19:28:53.556170    4125 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local cache directory
	I1212 19:28:53.556275    4125 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 to local cache
	I1212 19:28:53.593215    4125 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1212 19:28:53.593250    4125 cache.go:65] Caching tarball of preloaded images
	I1212 19:28:53.593429    4125 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1212 19:28:53.596813    4125 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1212 19:28:53.596834    4125 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1212 19:28:53.847585    4125 preload.go:295] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1212 19:28:53.847716    4125 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1212 19:29:02.095013    4125 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 as a tarball
	
	
	* The control-plane node download-only-559491 host does not exist
	  To start a cluster, run: "minikube start -p download-only-559491"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.11s)
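A note on the exit status asserted above: "logs" against a download-only profile fails by design, because that profile only caches artifacts and never creates a host. A minimal manual reproduction (a sketch; assumes the same profile is still present locally):

  out/minikube-linux-arm64 logs -p download-only-559491
  echo $?   # expected 85, paired with the "host does not exist" hint shown in the stdout above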

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.18s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-559491
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.18s)

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (12.21s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-972381 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-972381 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (12.206785522s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (12.21s)

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1212 19:29:20.275092    4120 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
I1212 19:29:20.275126    4120 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)
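The preload-exists check above only asserts that the cached tarball is on disk. Outside the harness the same thing can be eyeballed directly (a sketch; path copied from the log line above):

  ls -lh /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4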

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-972381
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-972381: exit status 85 (86.197998ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-559491 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-559491 │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │ 12 Dec 25 19:29 UTC │
	│ delete  │ -p download-only-559491                                                                                                                                                               │ download-only-559491 │ jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │ 12 Dec 25 19:29 UTC │
	│ start   │ -o=json --download-only -p download-only-972381 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-972381 │ jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:29:08
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:29:08.112122    4324 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:29:08.112302    4324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:29:08.112334    4324 out.go:374] Setting ErrFile to fd 2...
	I1212 19:29:08.112356    4324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:29:08.112619    4324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 19:29:08.113028    4324 out.go:368] Setting JSON to true
	I1212 19:29:08.113791    4324 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":697,"bootTime":1765567051,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 19:29:08.113880    4324 start.go:143] virtualization:  
	I1212 19:29:08.145534    4324 out.go:99] [download-only-972381] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 19:29:08.145846    4324 notify.go:221] Checking for updates...
	I1212 19:29:08.173044    4324 out.go:171] MINIKUBE_LOCATION=22112
	I1212 19:29:08.200750    4324 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:29:08.220570    4324 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:29:08.238358    4324 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	I1212 19:29:08.264257    4324 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1212 19:29:08.304670    4324 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 19:29:08.304940    4324 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:29:08.325040    4324 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 19:29:08.325162    4324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:29:08.397676    4324 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-12 19:29:08.388117553 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 19:29:08.397781    4324 docker.go:319] overlay module found
	I1212 19:29:08.412999    4324 out.go:99] Using the docker driver based on user configuration
	I1212 19:29:08.413044    4324 start.go:309] selected driver: docker
	I1212 19:29:08.413057    4324 start.go:927] validating driver "docker" against <nil>
	I1212 19:29:08.413175    4324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:29:08.469855    4324 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-12 19:29:08.461502725 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 19:29:08.470017    4324 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 19:29:08.470300    4324 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1212 19:29:08.470470    4324 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 19:29:08.481295    4324 out.go:171] Using Docker driver with root privileges
	I1212 19:29:08.490479    4324 cni.go:84] Creating CNI manager for ""
	I1212 19:29:08.490540    4324 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 19:29:08.490551    4324 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 19:29:08.490621    4324 start.go:353] cluster config:
	{Name:download-only-972381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-972381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:29:08.497635    4324 out.go:99] Starting "download-only-972381" primary control-plane node in "download-only-972381" cluster
	I1212 19:29:08.497657    4324 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 19:29:08.505183    4324 out.go:99] Pulling base image v0.0.48-1765505794-22112 ...
	I1212 19:29:08.505222    4324 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1212 19:29:08.505370    4324 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 19:29:08.520809    4324 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 to local cache
	I1212 19:29:08.520941    4324 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local cache directory
	I1212 19:29:08.520962    4324 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local cache directory, skipping pull
	I1212 19:29:08.520967    4324 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in cache, skipping pull
	I1212 19:29:08.520978    4324 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 as a tarball
	I1212 19:29:08.601513    4324 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1212 19:29:08.601536    4324 cache.go:65] Caching tarball of preloaded images
	I1212 19:29:08.601673    4324 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1212 19:29:08.633808    4324 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1212 19:29:08.633854    4324 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1212 19:29:08.721980    4324 preload.go:295] Got checksum from GCS API "cd1a05d5493c9270e248bf47fb3f071d"
	I1212 19:29:08.722033    4324 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:cd1a05d5493c9270e248bf47fb3f071d -> /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-972381 host does not exist
	  To start a cluster, run: "minikube start -p download-only-972381"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-972381
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/json-events (12.97s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-543142 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-543142 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (12.96618893s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (12.97s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1212 19:29:33.676115    4120 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1212 19:29:33.676149    4120 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-543142
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-543142: exit status 85 (73.72362ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                             ARGS                                                                                             │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-559491 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-559491 │ jenkins │ v1.37.0 │ 12 Dec 25 19:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │ 12 Dec 25 19:29 UTC │
	│ delete  │ -p download-only-559491                                                                                                                                                                      │ download-only-559491 │ jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │ 12 Dec 25 19:29 UTC │
	│ start   │ -o=json --download-only -p download-only-972381 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-972381 │ jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │ 12 Dec 25 19:29 UTC │
	│ delete  │ -p download-only-972381                                                                                                                                                                      │ download-only-972381 │ jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │ 12 Dec 25 19:29 UTC │
	│ start   │ -o=json --download-only -p download-only-543142 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-543142 │ jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:29:20
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:29:20.755288    4525 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:29:20.755771    4525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:29:20.755808    4525 out.go:374] Setting ErrFile to fd 2...
	I1212 19:29:20.755854    4525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:29:20.756384    4525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 19:29:20.756813    4525 out.go:368] Setting JSON to true
	I1212 19:29:20.757499    4525 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":710,"bootTime":1765567051,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 19:29:20.757588    4525 start.go:143] virtualization:  
	I1212 19:29:20.760992    4525 out.go:99] [download-only-543142] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 19:29:20.761223    4525 notify.go:221] Checking for updates...
	I1212 19:29:20.764223    4525 out.go:171] MINIKUBE_LOCATION=22112
	I1212 19:29:20.767264    4525 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:29:20.770141    4525 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:29:20.773016    4525 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	I1212 19:29:20.775723    4525 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1212 19:29:20.781448    4525 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 19:29:20.781726    4525 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:29:20.802809    4525 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 19:29:20.802917    4525 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:29:20.871146    4525 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-12 19:29:20.861674208 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 19:29:20.871248    4525 docker.go:319] overlay module found
	I1212 19:29:20.874178    4525 out.go:99] Using the docker driver based on user configuration
	I1212 19:29:20.874213    4525 start.go:309] selected driver: docker
	I1212 19:29:20.874230    4525 start.go:927] validating driver "docker" against <nil>
	I1212 19:29:20.874333    4525 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:29:20.928035    4525 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-12 19:29:20.919348529 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 19:29:20.928191    4525 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 19:29:20.928467    4525 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1212 19:29:20.928617    4525 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 19:29:20.931733    4525 out.go:171] Using Docker driver with root privileges
	I1212 19:29:20.934408    4525 cni.go:84] Creating CNI manager for ""
	I1212 19:29:20.934468    4525 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 19:29:20.934480    4525 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 19:29:20.934552    4525 start.go:353] cluster config:
	{Name:download-only-543142 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-543142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:29:20.937500    4525 out.go:99] Starting "download-only-543142" primary control-plane node in "download-only-543142" cluster
	I1212 19:29:20.937518    4525 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 19:29:20.940380    4525 out.go:99] Pulling base image v0.0.48-1765505794-22112 ...
	I1212 19:29:20.940432    4525 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 19:29:20.940590    4525 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 19:29:20.956344    4525 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 to local cache
	I1212 19:29:20.956465    4525 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local cache directory
	I1212 19:29:20.956490    4525 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local cache directory, skipping pull
	I1212 19:29:20.956494    4525 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in cache, skipping pull
	I1212 19:29:20.956502    4525 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 as a tarball
	I1212 19:29:20.997789    4525 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 19:29:20.997816    4525 cache.go:65] Caching tarball of preloaded images
	I1212 19:29:20.997998    4525 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 19:29:21.001079    4525 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1212 19:29:21.001100    4525 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1212 19:29:21.106818    4525 preload.go:295] Got checksum from GCS API "4ead9b9dbba82a7ecb06a001f1ffeaf3"
	I1212 19:29:21.106867    4525 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:4ead9b9dbba82a7ecb06a001f1ffeaf3 -> /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-543142 host does not exist
	  To start a cluster, run: "minikube start -p download-only-543142"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)
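Each preload download above is checksum-pinned: the md5 is fetched from the GCS API first and appended to the download URL. Verifying the cached tarball by hand would look like this (a sketch; checksum and path copied from the "Last Start" log above):

  md5sum /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
  # expected: 4ead9b9dbba82a7ecb06a001f1ffeaf3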

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-543142
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.62s)

=== RUN   TestBinaryMirror
I1212 19:29:34.894987    4120 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-585738 --alsologtostderr --binary-mirror http://127.0.0.1:44373 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "binary-mirror-585738" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-585738
--- PASS: TestBinaryMirror (0.62s)
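TestBinaryMirror redirects the kubectl/kubelet/kubeadm downloads to a local mirror instead of dl.k8s.io. The equivalent manual invocation, as exercised above (a sketch; the mirror URL is the throwaway local server the test starts, so it only works while that server is up):

  out/minikube-linux-arm64 start --download-only -p binary-mirror-585738 \
    --binary-mirror http://127.0.0.1:44373 --driver=docker --container-runtime=containerd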

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.12s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-593103
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-593103: exit status 85 (115.447343ms)

                                                
                                                
-- stdout --
	* Profile "addons-593103" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-593103"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.12s)
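Both PreSetup checks rely on minikube refusing addon operations for a profile that does not exist yet. By hand (a sketch; run before any "minikube start -p addons-593103"):

  out/minikube-linux-arm64 addons enable dashboard -p addons-593103; echo "exit=$?"
  # prints the 'Profile "addons-593103" not found' hint and exit=85, as captured above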

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-593103
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-593103: exit status 85 (94.577456ms)

                                                
                                                
-- stdout --
	* Profile "addons-593103" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-593103"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
TestAddons/Setup (167.1s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-593103 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-593103 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m47.102722707s)
--- PASS: TestAddons/Setup (167.10s)

                                                
                                    
TestAddons/serial/Volcano (40.79s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:870: volcano-scheduler stabilized in 48.459939ms
addons_test.go:886: volcano-controller stabilized in 48.717616ms
addons_test.go:878: volcano-admission stabilized in 49.09008ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-76c996c8bf-lq8jn" [36561e92-3245-4392-a1b2-8de726f09ba3] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004804964s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-6c447bd768-x2fmx" [e7c17722-04c0-4607-9fe7-3f1653a7dc34] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003280372s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-6fd4f85cb8-v6t2l" [a5f1dd4e-6113-4265-9d45-0cda7be093f3] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.002980395s
addons_test.go:905: (dbg) Run:  kubectl --context addons-593103 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-593103 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-593103 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [7214eacc-f2b7-4a29-88c5-4769d911fb63] Pending
helpers_test.go:353: "test-job-nginx-0" [7214eacc-f2b7-4a29-88c5-4769d911fb63] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [7214eacc-f2b7-4a29-88c5-4769d911fb63] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.003969174s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-593103 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-593103 addons disable volcano --alsologtostderr -v=1: (12.10886959s)
--- PASS: TestAddons/serial/Volcano (40.79s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-593103 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-593103 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.83s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-593103 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-593103 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [cdc09186-d1de-484c-b498-cb3f99093c07] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [cdc09186-d1de-484c-b498-cb3f99093c07] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004242178s
addons_test.go:696: (dbg) Run:  kubectl --context addons-593103 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-593103 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-593103 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-593103 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.83s)
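The gcp-auth addon mutates new pods to carry fake credentials, which is what the exec probes above confirm. The same verification by hand (a sketch reusing the commands from the log; the busybox pod is the one created from testdata/busybox.yaml):

  kubectl --context addons-593103 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT
  kubectl --context addons-593103 exec busybox -- cat /google-app-creds.json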

                                                
                                    
TestAddons/parallel/Registry (16.3s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 9.183737ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-54s85" [493f1663-0b91-4ac2-b4e0-e397365ed809] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003154195s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-p28pv" [bc8b9611-f972-49e6-bd62-2d9aeae5a527] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003814903s
addons_test.go:394: (dbg) Run:  kubectl --context addons-593103 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-593103 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-593103 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.299181555s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-593103 ip
2025/12/12 19:33:37 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-593103 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.30s)
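Beyond the in-cluster wget probe, the registry is also reachable from the host through the node IP on port 5000 (the DEBUG GET above). A hand check (a sketch; /v2/_catalog is the standard Docker registry listing endpoint, assumed here to be served by the addon's registry):

  curl -sS "http://$(out/minikube-linux-arm64 -p addons-593103 ip):5000/v2/_catalog"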

                                                
                                    
TestAddons/parallel/RegistryCreds (0.71s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 2.672919ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-593103
addons_test.go:334: (dbg) Run:  kubectl --context addons-593103 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-593103 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.71s)

                                                
                                    
TestAddons/parallel/Ingress (19.37s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-593103 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-593103 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-593103 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [fa4acf39-3a06-462f-8ac4-46a206dfcdc8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [fa4acf39-3a06-462f-8ac4-46a206dfcdc8] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.002881711s
I1212 19:34:57.150439    4120 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-593103 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-593103 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-593103 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-593103 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-593103 addons disable ingress-dns --alsologtostderr -v=1: (1.31533244s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-593103 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-593103 addons disable ingress --alsologtostderr -v=1: (8.115395155s)
--- PASS: TestAddons/parallel/Ingress (19.37s)
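The two probes above are the interesting part of this test: routing through ingress-nginx by Host header from inside the node, and resolving an ingress-dns name against the node IP. By hand (a sketch; names and IP copied from the log lines above):

  out/minikube-linux-arm64 -p addons-593103 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
  nslookup hello-john.test 192.168.49.2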

                                                
                                    
TestAddons/parallel/InspektorGadget (11.94s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-drqpw" [2fbdeaca-e802-4a89-bac7-d403351f3336] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003882483s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-593103 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-593103 addons disable inspektor-gadget --alsologtostderr -v=1: (5.937668171s)
--- PASS: TestAddons/parallel/InspektorGadget (11.94s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.87s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 3.226607ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-fbvrk" [f6fc46ff-f051-4f99-ad6d-39a0a46d6940] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.002756831s
addons_test.go:465: (dbg) Run:  kubectl --context addons-593103 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-593103 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.87s)
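Note: the health criterion here is simply that the metrics API answers. A hedged manual equivalent (assuming the same profile, and allowing the metrics-server pod one scrape interval to populate data):

    out/minikube-linux-arm64 -p addons-593103 addons enable metrics-server
    kubectl --context addons-593103 top pods -n kube-system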

                                                
                                    
TestAddons/parallel/CSI (51.76s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1212 19:34:04.197145    4120 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1212 19:34:04.200189    4120 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1212 19:34:04.200219    4120 kapi.go:107] duration metric: took 6.04298ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 6.053539ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-593103 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-593103 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [4bdf83c3-b9b6-47bf-9335-3cd0edf16427] Pending
helpers_test.go:353: "task-pv-pod" [4bdf83c3-b9b6-47bf-9335-3cd0edf16427] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [4bdf83c3-b9b6-47bf-9335-3cd0edf16427] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.004535249s
addons_test.go:574: (dbg) Run:  kubectl --context addons-593103 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-593103 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-593103 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-593103 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-593103 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-593103 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-593103 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [093c7490-dda1-452e-923a-cdab887339d9] Pending
helpers_test.go:353: "task-pv-pod-restore" [093c7490-dda1-452e-923a-cdab887339d9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [093c7490-dda1-452e-923a-cdab887339d9] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003627318s
addons_test.go:616: (dbg) Run:  kubectl --context addons-593103 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-593103 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-593103 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-593103 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-593103 addons disable volumesnapshots --alsologtostderr -v=1: (1.238906338s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-593103 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-593103 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.975306736s)
--- PASS: TestAddons/parallel/CSI (51.76s)
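Note: the sequence above is the standard CSI snapshot/restore round trip. Condensed, using the same testdata manifests the test itself applies:

    # Provision a PVC and a pod, snapshot the volume, then restore it into a new PVC/pod
    kubectl --context addons-593103 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-593103 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-593103 create -f testdata/csi-hostpath-driver/snapshot.yaml
    # Poll until readyToUse reports true before restoring
    kubectl --context addons-593103 get volumesnapshot new-snapshot-demo -o 'jsonpath={.status.readyToUse}'
    kubectl --context addons-593103 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-593103 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml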

                                                
                                    
TestAddons/parallel/Headlamp (16.81s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-593103 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-593103 --alsologtostderr -v=1: (1.033915389s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-jhxbk" [06b7a67b-f429-4c94-8610-6f8b574a9db2] Pending
helpers_test.go:353: "headlamp-dfcdc64b-jhxbk" [06b7a67b-f429-4c94-8610-6f8b574a9db2] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-jhxbk" [06b7a67b-f429-4c94-8610-6f8b574a9db2] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003170573s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-593103 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-593103 addons disable headlamp --alsologtostderr -v=1: (5.771700311s)
--- PASS: TestAddons/parallel/Headlamp (16.81s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.99s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-4dth9" [4ef2b9b5-6f6a-4daa-ab6c-29423a2e95bc] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003360595s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-593103 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.99s)

                                                
                                    
TestAddons/parallel/LocalPath (51.41s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-593103 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-593103 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-593103 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [aafe52b0-0794-4d72-874e-615ca6185c06] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [aafe52b0-0794-4d72-874e-615ca6185c06] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [aafe52b0-0794-4d72-874e-615ca6185c06] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003273786s
addons_test.go:969: (dbg) Run:  kubectl --context addons-593103 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-593103 ssh "cat /opt/local-path-provisioner/pvc-5511c305-a107-4678-8a2f-1c7ab975f118_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-593103 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-593103 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-593103 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-593103 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.152109098s)
--- PASS: TestAddons/parallel/LocalPath (51.41s)
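Note: the ssh step above verifies that data written through the PVC landed on the node's filesystem. The pvc-<uid>_default_test-pvc directory name embeds the PV UID and changes every run; one way to locate it by hand (the ls is an illustrative addition, not part of the test):

    kubectl --context addons-593103 get pvc test-pvc -o 'jsonpath={.status.phase}'
    out/minikube-linux-arm64 -p addons-593103 ssh "ls /opt/local-path-provisioner/"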

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.64s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-9qv4s" [6f1af0db-6cd2-4436-a91a-91148e4665ca] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004224873s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-593103 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.64s)

                                                
                                    
TestAddons/parallel/Yakd (11.97s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-lwmms" [92c8b060-341c-40f9-92d2-1545e2cee4bc] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003818191s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-593103 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-593103 addons disable yakd --alsologtostderr -v=1: (5.963354551s)
--- PASS: TestAddons/parallel/Yakd (11.97s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.34s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-593103
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-593103: (12.072556916s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-593103
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-593103
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-593103
--- PASS: TestAddons/StoppedEnableDisable (12.34s)

                                                
                                    
TestCertOptions (40.5s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-045902 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-045902 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (37.606186146s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-045902 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-045902 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-045902 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-045902" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-045902
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-045902: (2.099098092s)
--- PASS: TestCertOptions (40.50s)
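Note: the openssl step is asserting that the extra --apiserver-ips/--apiserver-names values became SANs in the apiserver certificate. A manual spot check (the grep is just one way to surface the SAN block):

    out/minikube-linux-arm64 -p cert-options-045902 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"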

                                                
                                    
TestCertExpiration (232s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-685020 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-685020 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (39.667565697s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-685020 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-685020 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (9.517064716s)
helpers_test.go:176: Cleaning up "cert-expiration-685020" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-685020
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-685020: (2.813297851s)
--- PASS: TestCertExpiration (232.00s)
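Note: the test starts with --cert-expiration=3m, waits for the certificates to lapse, then confirms a restart with --cert-expiration=8760h regenerates them. To inspect the current expiry by hand (a sketch; -enddate prints the certificate's notAfter field):

    out/minikube-linux-arm64 -p cert-expiration-685020 ssh \
      "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"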

                                                
                                    
TestForceSystemdFlag (33.79s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-587199 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-587199 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (31.442749888s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-587199 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-flag-587199" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-587199
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-587199: (2.022847437s)
--- PASS: TestForceSystemdFlag (33.79s)
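Note: the config.toml read above is checking that --force-systemd switched containerd to the systemd cgroup driver. A direct way to see the setting (the grep target is the standard containerd config key):

    out/minikube-linux-arm64 -p force-systemd-flag-587199 ssh \
      "cat /etc/containerd/config.toml" | grep SystemdCgroup
    # Expected on a systemd-forced profile: SystemdCgroup = true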

                                                
                                    
TestForceSystemdEnv (39.1s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-557154 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1212 20:52:22.896289    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-557154 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (36.316581465s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-557154 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-env-557154" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-557154
E1212 20:52:48.857307    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-557154: (2.44389541s)
--- PASS: TestForceSystemdEnv (39.10s)

                                                
                                    
TestDockerEnvContainerd (46.56s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-772922 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-772922 --driver=docker  --container-runtime=containerd: (30.826288808s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-772922"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-772922": (1.063755397s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-STG7lklP5TIQ/agent.23868" SSH_AGENT_PID="23869" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-STG7lklP5TIQ/agent.23868" SSH_AGENT_PID="23869" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-STG7lklP5TIQ/agent.23868" SSH_AGENT_PID="23869" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.272345784s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-STG7lklP5TIQ/agent.23868" SSH_AGENT_PID="23869" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:176: Cleaning up "dockerenv-772922" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-772922
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-772922: (2.039216509s)
--- PASS: TestDockerEnvContainerd (46.56s)
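Note: outside the harness, the same wiring is normally applied with eval rather than by pasting the exported variables (a sketch; the SSH agent socket and port are per-run values):

    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-772922)"
    # The host docker CLI now talks to the engine inside the minikube node over SSH
    docker version
    docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls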

                                                
                                    
TestErrorSpam/setup (31.37s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-643621 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-643621 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-643621 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-643621 --driver=docker  --container-runtime=containerd: (31.369263199s)
--- PASS: TestErrorSpam/setup (31.37s)

                                                
                                    
TestErrorSpam/start (0.85s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643621 --log_dir /tmp/nospam-643621 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643621 --log_dir /tmp/nospam-643621 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643621 --log_dir /tmp/nospam-643621 start --dry-run
--- PASS: TestErrorSpam/start (0.85s)

                                                
                                    
TestErrorSpam/status (1.19s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643621 --log_dir /tmp/nospam-643621 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643621 --log_dir /tmp/nospam-643621 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643621 --log_dir /tmp/nospam-643621 status
--- PASS: TestErrorSpam/status (1.19s)

                                                
                                    
TestErrorSpam/pause (1.75s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643621 --log_dir /tmp/nospam-643621 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643621 --log_dir /tmp/nospam-643621 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643621 --log_dir /tmp/nospam-643621 pause
--- PASS: TestErrorSpam/pause (1.75s)

                                                
                                    
TestErrorSpam/unpause (1.72s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643621 --log_dir /tmp/nospam-643621 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643621 --log_dir /tmp/nospam-643621 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643621 --log_dir /tmp/nospam-643621 unpause
--- PASS: TestErrorSpam/unpause (1.72s)

                                                
                                    
TestErrorSpam/stop (2.25s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643621 --log_dir /tmp/nospam-643621 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-643621 --log_dir /tmp/nospam-643621 stop: (2.045215197s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643621 --log_dir /tmp/nospam-643621 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643621 --log_dir /tmp/nospam-643621 stop
--- PASS: TestErrorSpam/stop (2.25s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (80.96s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-008271 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1212 19:37:22.904018    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:37:22.910741    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:37:22.922074    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:37:22.943378    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:37:22.984684    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:37:23.066023    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:37:23.227444    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:37:23.549063    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:37:24.191021    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:37:25.472325    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:37:28.033611    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:37:33.155573    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:37:43.396896    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:38:03.878937    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-008271 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m20.95745656s)
--- PASS: TestFunctional/serial/StartWithProxy (80.96s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (7.19s)
=== RUN   TestFunctional/serial/SoftStart
I1212 19:38:17.320954    4120 config.go:182] Loaded profile config "functional-008271": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-008271 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-008271 --alsologtostderr -v=8: (7.186903928s)
functional_test.go:678: soft start took 7.190236787s for "functional-008271" cluster.
I1212 19:38:24.508223    4120 config.go:182] Loaded profile config "functional-008271": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (7.19s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.11s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-008271 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.57s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-008271 cache add registry.k8s.io/pause:3.1: (1.274208992s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-008271 cache add registry.k8s.io/pause:3.3: (1.230415242s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-008271 cache add registry.k8s.io/pause:latest: (1.067207474s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.57s)
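Note: each cache add pulls the image on the host and preloads it into the node's container runtime; the minimal round trip is:

    out/minikube-linux-arm64 -p functional-008271 cache add registry.k8s.io/pause:3.1
    out/minikube-linux-arm64 cache list
    # Confirm the image is visible to the container runtime inside the node
    out/minikube-linux-arm64 -p functional-008271 ssh sudo crictl images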

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.33s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-008271 /tmp/TestFunctionalserialCacheCmdcacheadd_local2644626400/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 cache add minikube-local-cache-test:functional-008271
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 cache delete minikube-local-cache-test:functional-008271
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-008271
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.33s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.87s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-008271 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (291.799399ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.87s)
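Note: the reload flow deletes the image from the node's runtime, proves it is gone, and restores it from the host-side cache; the same sequence by hand:

    out/minikube-linux-arm64 -p functional-008271 ssh sudo crictl rmi registry.k8s.io/pause:latest
    # Expected to fail with "no such image" while the image is absent
    out/minikube-linux-arm64 -p functional-008271 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-008271 cache reload
    out/minikube-linux-arm64 -p functional-008271 ssh sudo crictl inspecti registry.k8s.io/pause:latest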

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.15s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 kubectl -- --context functional-008271 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-008271 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (69.35s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-008271 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1212 19:38:44.840336    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-008271 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m9.353782886s)
functional_test.go:776: restart took 1m9.353876807s for "functional-008271" cluster.
I1212 19:39:41.613609    4120 config.go:182] Loaded profile config "functional-008271": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (69.35s)
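Note: --extra-config takes component.flag=value pairs and threads them into the kubeadm component configuration. One hedged way to confirm the admission plugin landed (the component=kube-apiserver label is the usual kubeadm static-pod label, not something this log asserts):

    out/minikube-linux-arm64 start -p functional-008271 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    kubectl --context functional-008271 -n kube-system get pod \
      -l component=kube-apiserver -o yaml | grep enable-admission-plugins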

                                                
                                    
TestFunctional/serial/ComponentHealth (0.09s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-008271 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.59s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-008271 logs: (1.586240032s)
--- PASS: TestFunctional/serial/LogsCmd (1.59s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.45s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 logs --file /tmp/TestFunctionalserialLogsFileCmd2036157906/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-008271 logs --file /tmp/TestFunctionalserialLogsFileCmd2036157906/001/logs.txt: (1.446309568s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.45s)

                                                
                                    
TestFunctional/serial/InvalidService (5.27s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-008271 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-008271
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-008271: exit status 115 (792.796251ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32527 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-008271 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-008271 delete -f testdata/invalidsvc.yaml: (1.216515798s)
--- PASS: TestFunctional/serial/InvalidService (5.27s)
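Note: this test is about the failure mode: with no running pod behind the service, minikube service should fail fast with exit status 115 (SVC_UNREACHABLE) rather than hang. The exit code is observable directly:

    kubectl --context functional-008271 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-arm64 service invalid-svc -p functional-008271
    echo $?   # 115 in the run above
    kubectl --context functional-008271 delete -f testdata/invalidsvc.yaml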

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.45s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-008271 config get cpus: exit status 14 (102.765455ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-008271 config get cpus: exit status 14 (56.291274ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (8.81s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-008271 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-008271 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 39166: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.81s)

                                                
                                    
TestFunctional/parallel/DryRun (0.47s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-008271 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-008271 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (220.787327ms)
-- stdout --
	* [functional-008271] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I1212 19:40:20.689193   38872 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:40:20.689361   38872 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:40:20.689371   38872 out.go:374] Setting ErrFile to fd 2...
	I1212 19:40:20.689377   38872 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:40:20.689633   38872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 19:40:20.689988   38872 out.go:368] Setting JSON to false
	I1212 19:40:20.690873   38872 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1370,"bootTime":1765567051,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 19:40:20.690945   38872 start.go:143] virtualization:  
	I1212 19:40:20.694173   38872 out.go:179] * [functional-008271] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 19:40:20.699361   38872 notify.go:221] Checking for updates...
	I1212 19:40:20.702440   38872 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:40:20.705327   38872 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:40:20.708487   38872 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:40:20.711436   38872 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	I1212 19:40:20.714325   38872 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 19:40:20.717169   38872 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:40:20.720594   38872 config.go:182] Loaded profile config "functional-008271": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1212 19:40:20.721244   38872 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:40:20.760095   38872 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 19:40:20.760279   38872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:40:20.839307   38872 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-12 19:40:20.828641004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 19:40:20.839422   38872 docker.go:319] overlay module found
	I1212 19:40:20.842525   38872 out.go:179] * Using the docker driver based on existing profile
	I1212 19:40:20.845407   38872 start.go:309] selected driver: docker
	I1212 19:40:20.845444   38872 start.go:927] validating driver "docker" against &{Name:functional-008271 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-008271 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:40:20.845543   38872 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:40:20.849079   38872 out.go:203] 
	W1212 19:40:20.852098   38872 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 19:40:20.855736   38872 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-008271 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.47s)
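The dry-run exits during client-side validation: the requested 250MB is checked against an 1800MB floor, and the command fails with RSRC_INSUFFICIENT_REQ_MEMORY and exit status 23 before any container is touched. Below is a minimal Go sketch of that style of pre-flight check; the memory floor, message text, and exit code are taken from the log above, while the function and constant names are illustrative, not minikube's actual implementation.

package main

import (
	"fmt"
	"os"
)

// minMemoryMB mirrors the 1800MB floor cited in the log; the constant
// name is hypothetical, not a minikube identifier.
const minMemoryMB = 1800

func validateMemory(requestedMB int) error {
	if requestedMB < minMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMB is less than the usable minimum of %dMB",
			requestedMB, minMemoryMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to", err)
		os.Exit(23) // matches the exit status seen in the test output
	}
}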

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-008271 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-008271 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (195.008212ms)
-- stdout --
	* [functional-008271] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I1212 19:40:20.500567   38825 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:40:20.500740   38825 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:40:20.500771   38825 out.go:374] Setting ErrFile to fd 2...
	I1212 19:40:20.500793   38825 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:40:20.501889   38825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 19:40:20.502369   38825 out.go:368] Setting JSON to false
	I1212 19:40:20.503374   38825 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1370,"bootTime":1765567051,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 19:40:20.503482   38825 start.go:143] virtualization:  
	I1212 19:40:20.507100   38825 out.go:179] * [functional-008271] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1212 19:40:20.511091   38825 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:40:20.511175   38825 notify.go:221] Checking for updates...
	I1212 19:40:20.514168   38825 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:40:20.517011   38825 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 19:40:20.519905   38825 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	I1212 19:40:20.522818   38825 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 19:40:20.525686   38825 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:40:20.529074   38825 config.go:182] Loaded profile config "functional-008271": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1212 19:40:20.529666   38825 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:40:20.557395   38825 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 19:40:20.557548   38825 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 19:40:20.620942   38825 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-12 19:40:20.611551673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 19:40:20.621055   38825 docker.go:319] overlay module found
	I1212 19:40:20.624356   38825 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1212 19:40:20.627263   38825 start.go:309] selected driver: docker
	I1212 19:40:20.627280   38825 start.go:927] validating driver "docker" against &{Name:functional-008271 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-008271 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:40:20.627425   38825 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:40:20.631239   38825 out.go:203] 
	W1212 19:40:20.634109   38825 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 19:40:20.636962   38825 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)
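This test re-runs the same failing dry-run under a French locale and asserts the localized output: "Utilisation du pilote docker basé sur le profil existant" is the localized form of "Using the docker driver based on existing profile" from the DryRun log above, and the RSRC_INSUFFICIENT_REQ_MEMORY message is rendered in French as well. A toy Go sketch of environment-driven message selection follows, assuming a simple in-memory catalog rather than minikube's real translation files.

package main

import (
	"fmt"
	"os"
	"strings"
)

// translations is a toy message catalog keyed by language prefix;
// it is an illustration only, not minikube's i18n machinery.
var translations = map[string]string{
	"fr": "Utilisation du pilote docker basé sur le profil existant",
	"en": "Using the docker driver based on existing profile",
}

func localizedDriverMessage() string {
	lang := os.Getenv("LC_ALL")
	if lang == "" {
		lang = os.Getenv("LANG")
	}
	if strings.HasPrefix(lang, "fr") {
		return translations["fr"]
	}
	return translations["en"]
}

func main() {
	fmt.Println("*", localizedDriverMessage())
}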

TestFunctional/parallel/StatusCmd (1.1s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.10s)
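The second status invocation feeds a Go template to -f; each {{.Field}} is resolved against the status object (the "kublet" label is simply a literal in the test's format string, reproduced verbatim). A small stand-alone sketch with text/template, using a stand-in struct rather than minikube's actual status type:

package main

import (
	"os"
	"text/template"
)

// Status mirrors the fields referenced by the -f template in the log
// (Host, Kubelet, APIServer, Kubeconfig); the struct itself is a
// stand-in, not minikube's real type.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// Template string copied verbatim from the test, including the
	// "kublet" label, which is just literal text in the format string.
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}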

TestFunctional/parallel/ServiceCmdConnect (8.63s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-008271 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-008271 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-fsfnp" [d5392df9-0aef-4ef7-9b66-495b86d92f39] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-fsfnp" [d5392df9-0aef-4ef7-9b66-495b86d92f39] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.005529499s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30910
functional_test.go:1680: http://192.168.49.2:30910: success! body:
Request served by hello-node-connect-7d85dfc575-fsfnp
HTTP/1.1 GET /
Host: 192.168.49.2:30910
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.63s)
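The connect test is: create a deployment, expose it as a NodePort service, resolve the URL with minikube service ... --url, then poll until the echo-server answers. A Go sketch of the polling half, reusing the endpoint from the log above; the retry cadence and deadline are invented for the example.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// fetchWithRetry polls a NodePort endpoint (like the
// http://192.168.49.2:30910 URL in the log) until it answers
// or the deadline passes.
func fetchWithRetry(url string, deadline time.Duration) (string, error) {
	stop := time.Now().Add(deadline)
	for {
		resp, err := http.Get(url)
		if err == nil {
			body, readErr := io.ReadAll(resp.Body)
			resp.Body.Close()
			if readErr == nil && resp.StatusCode == http.StatusOK {
				return string(body), nil
			}
		}
		if time.Now().After(stop) {
			return "", fmt.Errorf("endpoint %s not ready before deadline", url)
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	body, err := fetchWithRetry("http://192.168.49.2:30910", 1*time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println(body)
}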

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (23.24s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [1e2f7cca-6274-4ba5-ba13-c593bd1eebef] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003833657s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-008271 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-008271 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-008271 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-008271 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [266a1961-f6fe-48d5-93bf-53dafc9de5c9] Pending
helpers_test.go:353: "sp-pod" [266a1961-f6fe-48d5-93bf-53dafc9de5c9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [266a1961-f6fe-48d5-93bf-53dafc9de5c9] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003383921s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-008271 exec sp-pod -- touch /tmp/mount/foo
E1212 19:40:06.761815    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-008271 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-008271 delete -f testdata/storage-provisioner/pod.yaml: (1.231164195s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-008271 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [6af8a134-5238-40bb-9649-cfb38643f7d8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [6af8a134-5238-40bb-9649-cfb38643f7d8] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003282504s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-008271 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.24s)
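The persistence check is a round trip: bind a PVC, write a file from one pod, delete the pod, start a fresh pod on the same claim, and confirm the file is still there. A simplified re-creation of those steps via kubectl; readiness waits between steps are elided, and the helper below is not the harness's actual code.

package main

import (
	"fmt"
	"os/exec"
)

// run wraps kubectl the way the harness does; the context name and
// manifest paths are the ones shown in the log above.
func run(args ...string) error {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-008271"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	return err
}

func main() {
	steps := [][]string{
		{"apply", "-f", "testdata/storage-provisioner/pvc.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		// If the claim really persists, foo survives into the new pod.
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"},
	}
	for _, s := range steps {
		if err := run(s...); err != nil {
			panic(err)
		}
	}
}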

TestFunctional/parallel/SSHCmd (0.72s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

TestFunctional/parallel/CpCmd (2.14s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh -n functional-008271 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 cp functional-008271:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1078471305/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh -n functional-008271 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh -n functional-008271 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.14s)

TestFunctional/parallel/FileSync (0.38s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4120/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh "sudo cat /etc/test/nested/copy/4120/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)

TestFunctional/parallel/CertSync (2.17s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4120.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh "sudo cat /etc/ssl/certs/4120.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4120.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh "sudo cat /usr/share/ca-certificates/4120.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/41202.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh "sudo cat /etc/ssl/certs/41202.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/41202.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh "sudo cat /usr/share/ca-certificates/41202.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.17s)

TestFunctional/parallel/NodeLabels (0.14s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-008271 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.14s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-008271 ssh "sudo systemctl is-active docker": exit status 1 (333.758716ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-008271 ssh "sudo systemctl is-active crio": exit status 1 (295.437856ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)
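Both probes rely on systemctl is-active semantics: it prints the unit state on stdout and exits non-zero for a stopped unit (3 in the log, surfacing as "ssh: Process exited with status 3" alongside "inactive"), which is exactly what the test asserts for docker and crio on a containerd cluster. A Go sketch of reading that state and exit code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// isActive reports the state string and exit code of
// `systemctl is-active <unit>`; stdout is still captured even
// when the command exits non-zero.
func isActive(unit string) (string, int) {
	out, err := exec.Command("systemctl", "is-active", unit).Output()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	for _, unit := range []string{"docker", "crio", "containerd"} {
		state, code := isActive(unit)
		fmt.Printf("%s: state=%q exit=%d\n", unit, state, code)
	}
}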

TestFunctional/parallel/License (0.33s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.72s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-008271 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-008271 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-008271 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-008271 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 36479: os: process already finished
helpers_test.go:526: unable to kill pid 36274: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.72s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-008271 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.43s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-008271 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [eb8a20ef-bbd1-432e-a7fa-1aed712ef2c7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [eb8a20ef-bbd1-432e-a7fa-1aed712ef2c7] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004572993s
I1212 19:40:00.338226    4120 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.43s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.15s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-008271 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.15s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.125.27 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-008271 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-008271 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-008271 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-zrvsw" [1af05355-5613-4d58-88c7-13c98a0ec5fd] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-zrvsw" [1af05355-5613-4d58-88c7-13c98a0ec5fd] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.007530972s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "383.926031ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "55.062298ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "375.456738ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "67.324431ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

TestFunctional/parallel/ServiceCmd/List (0.71s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.71s)

TestFunctional/parallel/MountCmd/any-port (8.54s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-008271 /tmp/TestFunctionalparallelMountCmdany-port3433151484/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765568416701101724" to /tmp/TestFunctionalparallelMountCmdany-port3433151484/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765568416701101724" to /tmp/TestFunctionalparallelMountCmdany-port3433151484/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765568416701101724" to /tmp/TestFunctionalparallelMountCmdany-port3433151484/001/test-1765568416701101724
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-008271 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (524.412666ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1212 19:40:17.227377    4120 retry.go:31] will retry after 332.929808ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 12 19:40 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 12 19:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 12 19:40 test-1765568416701101724
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh cat /mount-9p/test-1765568416701101724
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-008271 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [e907b82f-1539-4b57-a29e-3b974d2a0f31] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [e907b82f-1539-4b57-a29e-3b974d2a0f31] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [e907b82f-1539-4b57-a29e-3b974d2a0f31] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003867781s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-008271 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-008271 /tmp/TestFunctionalparallelMountCmdany-port3433151484/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.54s)
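The first findmnt probe races the background mount daemon, so the harness retries after a delay (the "retry.go:31] will retry after 332.929808ms" line above). A generic Go sketch of that retry-with-backoff pattern; the attempt count and backoff parameters are made up for the example, not the harness's actual tuning.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry re-runs fn with jittered, growing sleeps, similar in spirit
// to the harness's retry helper.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 300*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("mount not visible yet")
		}
		return nil
	})
	fmt.Println("result:", err)
}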

TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 service list -o json
functional_test.go:1504: Took "518.842229ms" to run "out/minikube-linux-arm64 -p functional-008271 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31350
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

TestFunctional/parallel/ServiceCmd/Format (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.51s)

TestFunctional/parallel/ServiceCmd/URL (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31350
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.46s)

TestFunctional/parallel/MountCmd/specific-port (2.49s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-008271 /tmp/TestFunctionalparallelMountCmdspecific-port1214614555/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-008271 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (591.232616ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1212 19:40:25.832464    4120 retry.go:31] will retry after 619.013922ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-008271 /tmp/TestFunctionalparallelMountCmdspecific-port1214614555/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-008271 ssh "sudo umount -f /mount-9p": exit status 1 (350.627736ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-008271 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-008271 /tmp/TestFunctionalparallelMountCmdspecific-port1214614555/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.49s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.56s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-008271 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3555062239/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-008271 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3555062239/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-008271 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3555062239/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-008271 ssh "findmnt -T" /mount1: exit status 1 (930.894893ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1212 19:40:28.669885    4120 retry.go:31] will retry after 636.75299ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh "findmnt -T" /mount2
2025/12/12 19:40:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-008271 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-008271 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3555062239/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-008271 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3555062239/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-008271 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3555062239/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.56s)
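Cleanup here kills three background mount daemons, and the "unable to find parent, assuming dead" and "process already finished" lines show why such kills must tolerate children that have already exited. One way to make that robust is to start each child in its own process group and signal the group, treating kill failures as benign; a Linux-only Go sketch (not the harness's implementation):

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	// Setpgid puts the child (and anything it spawns) in its own
	// process group, so one signal can reach the whole tree.
	cmd := exec.Command("sleep", "60")
	cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	time.Sleep(100 * time.Millisecond)
	// A negative pid addresses the process group; if the group is
	// already gone, the error is harmless, like the log lines above.
	if err := syscall.Kill(-cmd.Process.Pid, syscall.SIGTERM); err != nil {
		fmt.Println("kill:", err) // e.g. process already finished
	}
	_ = cmd.Wait()
	fmt.Println("cleaned up")
}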

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.31s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-008271 version -o=json --components: (1.30523281s)
--- PASS: TestFunctional/parallel/Version/components (1.31s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-008271 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-008271
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-008271
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-008271 image ls --format short --alsologtostderr:
I1212 19:40:37.189416   41919 out.go:360] Setting OutFile to fd 1 ...
I1212 19:40:37.189622   41919 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:40:37.189636   41919 out.go:374] Setting ErrFile to fd 2...
I1212 19:40:37.189643   41919 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:40:37.189925   41919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
I1212 19:40:37.190552   41919 config.go:182] Loaded profile config "functional-008271": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1212 19:40:37.190664   41919 config.go:182] Loaded profile config "functional-008271": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1212 19:40:37.191208   41919 cli_runner.go:164] Run: docker container inspect functional-008271 --format={{.State.Status}}
I1212 19:40:37.207974   41919 ssh_runner.go:195] Run: systemctl --version
I1212 19:40:37.208074   41919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-008271
I1212 19:40:37.241216   41919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-008271/id_rsa Username:docker}
I1212 19:40:37.370874   41919 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
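The short format prints one repo:tag per line with no other columns, so it is the easiest of the four formats to filter from a shell. A minimal sketch against the same profile (assuming the cluster is still running):

    minikube -p functional-008271 image ls --format short | grep kube-apiserver
    # registry.k8s.io/kube-apiserver:v1.34.2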

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-008271 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-scheduler              │ v1.34.2            │ sha256:4f982e │ 15.8MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.2            │ sha256:b178af │ 24.6MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.2            │ sha256:94bff1 │ 22.8MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.2            │ sha256:1b3491 │ 20.7MB │
│ docker.io/library/minikube-local-cache-test │ functional-008271  │ sha256:5661f3 │ 990B   │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ docker.io/kicbase/echo-server               │ functional-008271  │ sha256:ce2d2c │ 2.17MB │
│ public.ecr.aws/nginx/nginx                  │ alpine             │ sha256:10afed │ 23MB   │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-008271 image ls --format table --alsologtostderr:
I1212 19:40:37.490575   41998 out.go:360] Setting OutFile to fd 1 ...
I1212 19:40:37.491357   41998 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:40:37.491401   41998 out.go:374] Setting ErrFile to fd 2...
I1212 19:40:37.491423   41998 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:40:37.492273   41998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
I1212 19:40:37.494578   41998 config.go:182] Loaded profile config "functional-008271": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1212 19:40:37.494823   41998 config.go:182] Loaded profile config "functional-008271": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1212 19:40:37.496418   41998 cli_runner.go:164] Run: docker container inspect functional-008271 --format={{.State.Status}}
I1212 19:40:37.519173   41998 ssh_runner.go:195] Run: systemctl --version
I1212 19:40:37.519227   41998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-008271
I1212 19:40:37.543538   41998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-008271/id_rsa Username:docker}
I1212 19:40:37.658903   41998 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-008271 image ls --format json --alsologtostderr:
[{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21136588"},{"id":"sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"15775785"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags
":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"24559643"},{"id":"sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"22802260"},{"id":"sha256:3d18732f8686cc3c878
055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:5661f32bede572b676872cc804975f90ff6296cfb902f98dcfd0a018d5cab590","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-008271"],"size":"990"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a460
4f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"20718696"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-008271"],"size":"2173567"},{"id":"sha256:10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"22985759"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:20b332c9a
70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-008271 image ls --format json --alsologtostderr:
I1212 19:40:37.477250   41993 out.go:360] Setting OutFile to fd 1 ...
I1212 19:40:37.477402   41993 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:40:37.477413   41993 out.go:374] Setting ErrFile to fd 2...
I1212 19:40:37.477418   41993 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:40:37.477680   41993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
I1212 19:40:37.478377   41993 config.go:182] Loaded profile config "functional-008271": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1212 19:40:37.478497   41993 config.go:182] Loaded profile config "functional-008271": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1212 19:40:37.479098   41993 cli_runner.go:164] Run: docker container inspect functional-008271 --format={{.State.Status}}
I1212 19:40:37.501217   41993 ssh_runner.go:195] Run: systemctl --version
I1212 19:40:37.501281   41993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-008271
I1212 19:40:37.526282   41993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-008271/id_rsa Username:docker}
I1212 19:40:37.646672   41993 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
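Note that size is encoded as a JSON string rather than a number, so scripts consuming this output need an explicit conversion. A minimal sketch (assumes jq is installed on the host):

    # every tag the runtime knows about
    minikube -p functional-008271 image ls --format json | jq -r '.[].repoTags[]'
    # total bytes across all images; tonumber handles the string-encoded sizes
    minikube -p functional-008271 image ls --format json | jq '[.[].size | tonumber] | add'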

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-008271 image ls --format yaml --alsologtostderr:
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "15775785"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-008271
size: "2173567"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:5661f32bede572b676872cc804975f90ff6296cfb902f98dcfd0a018d5cab590
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-008271
size: "990"
- id: sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "20718696"
- id: sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "22802260"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "22985759"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"
- id: sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "24559643"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-008271 image ls --format yaml --alsologtostderr:
I1212 19:40:37.192378   41920 out.go:360] Setting OutFile to fd 1 ...
I1212 19:40:37.192511   41920 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:40:37.192573   41920 out.go:374] Setting ErrFile to fd 2...
I1212 19:40:37.192594   41920 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:40:37.192856   41920 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
I1212 19:40:37.193507   41920 config.go:182] Loaded profile config "functional-008271": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1212 19:40:37.193663   41920 config.go:182] Loaded profile config "functional-008271": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1212 19:40:37.194200   41920 cli_runner.go:164] Run: docker container inspect functional-008271 --format={{.State.Status}}
I1212 19:40:37.241501   41920 ssh_runner.go:195] Run: systemctl --version
I1212 19:40:37.241556   41920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-008271
I1212 19:40:37.265734   41920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-008271/id_rsa Username:docker}
I1212 19:40:37.371555   41920 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-008271 ssh pgrep buildkitd: exit status 1 (274.585259ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 image build -t localhost/my-image:functional-008271 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-008271 image build -t localhost/my-image:functional-008271 testdata/build --alsologtostderr: (3.297540431s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-008271 image build -t localhost/my-image:functional-008271 testdata/build --alsologtostderr:
I1212 19:40:38.013912   42127 out.go:360] Setting OutFile to fd 1 ...
I1212 19:40:38.014544   42127 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:40:38.014584   42127 out.go:374] Setting ErrFile to fd 2...
I1212 19:40:38.014607   42127 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:40:38.015102   42127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
I1212 19:40:38.016062   42127 config.go:182] Loaded profile config "functional-008271": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1212 19:40:38.018946   42127 config.go:182] Loaded profile config "functional-008271": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1212 19:40:38.019611   42127 cli_runner.go:164] Run: docker container inspect functional-008271 --format={{.State.Status}}
I1212 19:40:38.039209   42127 ssh_runner.go:195] Run: systemctl --version
I1212 19:40:38.039273   42127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-008271
I1212 19:40:38.059544   42127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-008271/id_rsa Username:docker}
I1212 19:40:38.166744   42127 build_images.go:162] Building image from path: /tmp/build.2164849540.tar
I1212 19:40:38.166811   42127 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 19:40:38.175522   42127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2164849540.tar
I1212 19:40:38.179131   42127 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2164849540.tar: stat -c "%s %y" /var/lib/minikube/build/build.2164849540.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2164849540.tar': No such file or directory
I1212 19:40:38.179164   42127 ssh_runner.go:362] scp /tmp/build.2164849540.tar --> /var/lib/minikube/build/build.2164849540.tar (3072 bytes)
I1212 19:40:38.198647   42127 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2164849540
I1212 19:40:38.206830   42127 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2164849540 -xf /var/lib/minikube/build/build.2164849540.tar
I1212 19:40:38.215703   42127 containerd.go:394] Building image: /var/lib/minikube/build/build.2164849540
I1212 19:40:38.215793   42127 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2164849540 --local dockerfile=/var/lib/minikube/build/build.2164849540 --output type=image,name=localhost/my-image:functional-008271
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:a13f4e86a2287168f28a70cf576d8be4cebdcfbfeec2c7b3cb4e9424c2ca9b57 0.0s done
#8 exporting config sha256:16360f97873068815f4504816d069d284d1b0c79602c7b229a6304b860b64c67 0.0s done
#8 naming to localhost/my-image:functional-008271 done
#8 DONE 0.2s
I1212 19:40:41.234259   42127 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2164849540 --local dockerfile=/var/lib/minikube/build/build.2164849540 --output type=image,name=localhost/my-image:functional-008271: (3.018440397s)
I1212 19:40:41.234328   42127 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2164849540
I1212 19:40:41.242162   42127 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2164849540.tar
I1212 19:40:41.249644   42127 build_images.go:218] Built localhost/my-image:functional-008271 from /tmp/build.2164849540.tar
I1212 19:40:41.249679   42127 build_images.go:134] succeeded building to: functional-008271
I1212 19:40:41.249685   42127 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.80s)
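The build steps above ([1/3] FROM gcr.io/k8s-minikube/busybox, [2/3] RUN true, [3/3] ADD content.txt /) imply that testdata/build holds a three-instruction Dockerfile along these lines; the file itself is not shown in this log, so this is a reconstruction:

    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /

Also visible in the stderr: with the containerd runtime, minikube tars the build context, copies it to /var/lib/minikube/build inside the node, and drives the build with buildctl rather than docker build.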

TestFunctional/parallel/ImageCommands/Setup (0.73s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-008271
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.73s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 image load --daemon kicbase/echo-server:functional-008271 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-008271 image load --daemon kicbase/echo-server:functional-008271 --alsologtostderr: (1.106863725s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.40s)
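image load --daemon transfers an image from the host's docker daemon into the cluster's containerd store, which is why the Setup subtest retagged kicbase/echo-server with the profile name beforehand. Condensed into a sketch (profile name as above):

    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-008271
    minikube -p functional-008271 image load --daemon kicbase/echo-server:functional-008271
    minikube -p functional-008271 image ls | grep echo-server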

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 image load --daemon kicbase/echo-server:functional-008271 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-008271 image load --daemon kicbase/echo-server:functional-008271 --alsologtostderr: (1.019488445s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.30s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-008271
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 image load --daemon kicbase/echo-server:functional-008271 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.39s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 image save kicbase/echo-server:functional-008271 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 image rm kicbase/echo-server:functional-008271 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-008271
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-008271 image save --daemon kicbase/echo-server:functional-008271 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-008271
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)
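Taken together, the last four subtests exercise the full save/load round trip between containerd and the host; condensed into one sequence (same profile and archive name as above):

    # containerd -> tar archive on the host
    minikube -p functional-008271 image save kicbase/echo-server:functional-008271 ./echo-server-save.tar
    # drop the image from containerd, then restore it from the archive
    minikube -p functional-008271 image rm kicbase/echo-server:functional-008271
    minikube -p functional-008271 image load ./echo-server-save.tar
    # containerd -> host docker daemon
    minikube -p functional-008271 image save --daemon kicbase/echo-server:functional-008271
    docker image inspect kicbase/echo-server:functional-008271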

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-008271
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-008271
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-008271
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-384006 cache add registry.k8s.io/pause:3.1: (1.185547137s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-384006 cache add registry.k8s.io/pause:3.3: (1.117799751s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-384006 cache add registry.k8s.io/pause:latest: (1.093761496s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach2469539443/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 cache add minikube-local-cache-test:functional-384006
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 cache delete minikube-local-cache-test:functional-384006
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-384006
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.07s)
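The local variant starts from an image that exists only in the host daemon: build it, add it to minikube's cache, then clean up both sides. A condensed sketch of what the subtest does (the build context here is a placeholder for the generated temp directory):

    docker build -t minikube-local-cache-test:functional-384006 ./build-context
    minikube -p functional-384006 cache add minikube-local-cache-test:functional-384006
    minikube -p functional-384006 cache delete minikube-local-cache-test:functional-384006
    docker rmi minikube-local-cache-test:functional-384006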

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.83s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-384006 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (285.21455ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.83s)
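The reload flow removes the image from inside the node, confirms crictl can no longer resolve it (the exit status 1 above), and then repopulates the runtime from minikube's on-host cache; condensed:

    minikube -p functional-384006 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-384006 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
    minikube -p functional-384006 cache reload
    minikube -p functional-384006 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 0: restored from cache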

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-384006 logs: (1.070430661s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.98s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2323179820/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.98s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-384006 config get cpus: exit status 14 (64.092365ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-384006 config get cpus: exit status 14 (67.290716ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.44s)
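config get signals a missing key with exit status 14, distinct from a general failure, and the test leans on that distinction. The full cycle being exercised:

    minikube -p functional-384006 config unset cpus
    minikube -p functional-384006 config get cpus    # exit status 14: key not in config
    minikube -p functional-384006 config set cpus 2
    minikube -p functional-384006 config get cpus    # prints 2, exit status 0
    minikube -p functional-384006 config unset cpus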

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-384006 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-384006 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (179.433514ms)
-- stdout --
	* [functional-384006] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1212 20:10:13.454088   71588 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:10:13.454226   71588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:10:13.454238   71588 out.go:374] Setting ErrFile to fd 2...
	I1212 20:10:13.454257   71588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:10:13.454556   71588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 20:10:13.455009   71588 out.go:368] Setting JSON to false
	I1212 20:10:13.455831   71588 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3163,"bootTime":1765567051,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 20:10:13.455937   71588 start.go:143] virtualization:  
	I1212 20:10:13.459193   71588 out.go:179] * [functional-384006] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:10:13.462080   71588 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:10:13.462230   71588 notify.go:221] Checking for updates...
	I1212 20:10:13.467831   71588 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:10:13.470672   71588 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 20:10:13.473625   71588 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	I1212 20:10:13.476653   71588 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:10:13.479615   71588 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:10:13.483285   71588 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 20:10:13.484169   71588 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:10:13.505799   71588 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:10:13.505922   71588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:10:13.561867   71588 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:10:13.551128134 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:10:13.561972   71588 docker.go:319] overlay module found
	I1212 20:10:13.565028   71588 out.go:179] * Using the docker driver based on existing profile
	I1212 20:10:13.567807   71588 start.go:309] selected driver: docker
	I1212 20:10:13.567826   71588 start.go:927] validating driver "docker" against &{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:10:13.568018   71588 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:10:13.571530   71588 out.go:203] 
	W1212 20:10:13.574654   71588 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 20:10:13.577663   71588 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-384006 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.42s)
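Exit status 23 corresponds to RSRC_INSUFFICIENT_REQ_MEMORY: the requested 250MiB falls below the 1800MB usable minimum, and --dry-run surfaces that validation without touching the existing cluster. A shell check in the same spirit:

    minikube start -p functional-384006 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
    echo $?    # 23: requested memory below the usable minimum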

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-384006 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-384006 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (227.763728ms)
-- stdout --
	* [functional-384006] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1212 20:10:13.260795   71534 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:10:13.260973   71534 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:10:13.261000   71534 out.go:374] Setting ErrFile to fd 2...
	I1212 20:10:13.261013   71534 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:10:13.261489   71534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 20:10:13.262016   71534 out.go:368] Setting JSON to false
	I1212 20:10:13.262967   71534 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3163,"bootTime":1765567051,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 20:10:13.263043   71534 start.go:143] virtualization:  
	I1212 20:10:13.266465   71534 out.go:179] * [functional-384006] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1212 20:10:13.270426   71534 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:10:13.270495   71534 notify.go:221] Checking for updates...
	I1212 20:10:13.276152   71534 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:10:13.279314   71534 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 20:10:13.282315   71534 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	I1212 20:10:13.285226   71534 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:10:13.288161   71534 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:10:13.291600   71534 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 20:10:13.292205   71534 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:10:13.323047   71534 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:10:13.323160   71534 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:10:13.381579   71534 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:10:13.372059058 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:10:13.381701   71534 docker.go:319] overlay module found
	I1212 20:10:13.384892   71534 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1212 20:10:13.387709   71534 start.go:309] selected driver: docker
	I1212 20:10:13.387755   71534 start.go:927] validating driver "docker" against &{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:10:13.387907   71534 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:10:13.391443   71534 out.go:203] 
	W1212 20:10:13.394422   71534 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 20:10:13.397309   71534 out.go:203] 
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.23s)
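
Note on the localized output above: minikube picks its translation from the caller's locale environment. A minimal reproduction sketch, assuming a glibc French locale is installed on the host and that LC_ALL is one of the variables minikube's translation layer consults:

# hypothetical manual reproduction of the French dry-run output
LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-384006 \
  --dry-run --memory 250MB --driver=docker --container-runtime=containerd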
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.13s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.13s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.71s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.71s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.18s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh -n functional-384006 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 cp functional-384006:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp3208660650/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh -n functional-384006 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh -n functional-384006 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.18s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.29s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4120/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh "sudo cat /etc/test/nested/copy/4120/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.29s)
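
The path checked above follows minikube's file-sync convention: anything placed under $MINIKUBE_HOME/files is copied into the node at the same absolute path. A minimal sketch (the 4120 path component is this run's test PID; a cluster start is assumed to trigger the sync):

mkdir -p ~/.minikube/files/etc/test/nested/copy/4120
echo "Test file for checking file sync process" \
  > ~/.minikube/files/etc/test/nested/copy/4120/hosts
out/minikube-linux-arm64 -p functional-384006 ssh "sudo cat /etc/test/nested/copy/4120/hosts"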
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.71s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4120.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh "sudo cat /etc/ssl/certs/4120.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4120.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh "sudo cat /usr/share/ca-certificates/4120.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/41202.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh "sudo cat /etc/ssl/certs/41202.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/41202.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh "sudo cat /usr/share/ca-certificates/41202.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.71s)
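
The hashed filenames probed above (51391683.0, 3ec20f2e.0) follow the standard OpenSSL c_rehash naming for CA directories, so each one is derivable from its PEM file. A sketch, assuming the test certificate is available on the host (path illustrative):

# prints the 8-hex-digit subject hash that becomes the <hash>.0 filename
openssl x509 -noout -subject_hash -in 4120.pem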
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.57s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-384006 ssh "sudo systemctl is-active docker": exit status 1 (278.630747ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-384006 ssh "sudo systemctl is-active crio": exit status 1 (291.221523ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.57s)
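
Since this cluster runs containerd, the test asserts that docker and crio both report inactive over SSH. A complementary check (not part of the test) is that the selected runtime itself is the one active:

out/minikube-linux-arm64 -p functional-384006 ssh "sudo systemctl is-active containerd"
# expected output: active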
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.3s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.30s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-384006 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-384006 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.42s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.42s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.41s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "334.143601ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "72.423803ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.41s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.38s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "325.928091ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "52.035572ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.38s)
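
The JSON form of profile list is machine-readable; a sketch of consuming it with jq, assuming the top-level "valid"/"invalid" arrays of minikube's profile schema:

out/minikube-linux-arm64 profile list -o json | jq -r '.valid[].Name'
# expected here: functional-384006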
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.9s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo34640341/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-384006 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (347.407486ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1212 20:10:06.409408    4120 retry.go:31] will retry after 498.952142ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo34640341/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-384006 ssh "sudo umount -f /mount-9p": exit status 1 (265.804515ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-384006 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo34640341/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.90s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.15s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4202071176/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4202071176/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4202071176/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-384006 ssh "findmnt -T" /mount1: exit status 1 (533.03748ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1212 20:10:08.501011    4120 retry.go:31] will retry after 700.174541ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-384006 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4202071176/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4202071176/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-384006 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4202071176/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.15s)
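
The cleanup path exercised above, in isolation: --kill=true tears down every mount process belonging to the profile, which is why all three daemonized mounts are reported dead afterwards. A sketch using only the flags seen in this run:

out/minikube-linux-arm64 mount -p functional-384006 /tmp/src:/mount1 --alsologtostderr -v=1 &
out/minikube-linux-arm64 -p functional-384006 ssh "findmnt -T /mount1"
out/minikube-linux-arm64 mount -p functional-384006 --kill=true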
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.05s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.05s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.52s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 version -o=json --components
E1212 20:10:25.974025    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.52s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-384006 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-384006
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-384006
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-384006 image ls --format short --alsologtostderr:
I1212 20:10:26.223604   73769 out.go:360] Setting OutFile to fd 1 ...
I1212 20:10:26.223759   73769 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:10:26.223773   73769 out.go:374] Setting ErrFile to fd 2...
I1212 20:10:26.223786   73769 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:10:26.224243   73769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
I1212 20:10:26.225894   73769 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1212 20:10:26.226052   73769 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1212 20:10:26.226580   73769 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
I1212 20:10:26.244494   73769 ssh_runner.go:195] Run: systemctl --version
I1212 20:10:26.244550   73769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
I1212 20:10:26.261563   73769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
I1212 20:10:26.366611   73769 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.24s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-384006 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server               │ functional-384006  │ sha256:ce2d2c │ 2.17MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0     │ sha256:404c2e │ 22.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ localhost/my-image                          │ functional-384006  │ sha256:06f44f │ 831kB  │
│ registry.k8s.io/coredns/coredns             │ v1.13.1            │ sha256:e08f4d │ 21.2MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ docker.io/library/minikube-local-cache-test │ functional-384006  │ sha256:5661f3 │ 990B   │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0     │ sha256:68b5f7 │ 20.7MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0     │ sha256:163787 │ 15.4MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0     │ sha256:ccd634 │ 24.7MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-384006 image ls --format table --alsologtostderr:
I1212 20:10:30.551057   74208 out.go:360] Setting OutFile to fd 1 ...
I1212 20:10:30.551603   74208 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:10:30.551619   74208 out.go:374] Setting ErrFile to fd 2...
I1212 20:10:30.551626   74208 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:10:30.551964   74208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
I1212 20:10:30.552641   74208 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1212 20:10:30.552768   74208 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1212 20:10:30.553276   74208 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
I1212 20:10:30.570431   74208 ssh_runner.go:195] Run: systemctl --version
I1212 20:10:30.570494   74208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
I1212 20:10:30.587495   74208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
I1212 20:10:30.690320   74208 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.24s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-384006 image ls --format json --alsologtostderr:
[{"id":"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"24678359"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:06f44feca4d5243f8148827aa41f020f045feda1157d8654198a92aa82e6a6d1","repoDigests":[],"repoTags":["localhost/my-image:functional-384006"],"size":"830603"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-384006"],"size":"2173567"},{"id":"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"21168808"},{"id":"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"20661043"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21136588"},{"id":"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"22429671"},{"id":"sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"15391364"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:5661f32bede572b676872cc804975f90ff6296cfb902f98dcfd0a018d5cab590","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-384006"],"size":"990"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-384006 image ls --format json --alsologtostderr:
I1212 20:10:30.302759   74166 out.go:360] Setting OutFile to fd 1 ...
I1212 20:10:30.302886   74166 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:10:30.302897   74166 out.go:374] Setting ErrFile to fd 2...
I1212 20:10:30.302902   74166 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:10:30.303157   74166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
I1212 20:10:30.303767   74166 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1212 20:10:30.303929   74166 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1212 20:10:30.304483   74166 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
I1212 20:10:30.321305   74166 ssh_runner.go:195] Run: systemctl --version
I1212 20:10:30.321369   74166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
I1212 20:10:30.337842   74166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
I1212 20:10:30.442815   74166 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.26s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-384006 image ls --format yaml --alsologtostderr:
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-384006
size: "2173567"
- id: sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "21168808"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:06f44feca4d5243f8148827aa41f020f045feda1157d8654198a92aa82e6a6d1
repoDigests: []
repoTags:
- localhost/my-image:functional-384006
size: "830603"
- id: sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "22429671"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:5661f32bede572b676872cc804975f90ff6296cfb902f98dcfd0a018d5cab590
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-384006
size: "990"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "24678359"
- id: sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "20661043"
- id: sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "15391364"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-384006 image ls --format yaml --alsologtostderr:
I1212 20:10:30.071811   74129 out.go:360] Setting OutFile to fd 1 ...
I1212 20:10:30.071983   74129 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:10:30.071990   74129 out.go:374] Setting ErrFile to fd 2...
I1212 20:10:30.071996   74129 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:10:30.072419   74129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
I1212 20:10:30.073452   74129 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1212 20:10:30.073593   74129 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1212 20:10:30.074954   74129 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
I1212 20:10:30.096663   74129 ssh_runner.go:195] Run: systemctl --version
I1212 20:10:30.096774   74129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
I1212 20:10:30.116054   74129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
I1212 20:10:30.222804   74129 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.26s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.45s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-384006 ssh pgrep buildkitd: exit status 1 (264.550452ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 image build -t localhost/my-image:functional-384006 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-384006 image build -t localhost/my-image:functional-384006 testdata/build --alsologtostderr: (2.961062372s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-384006 image build -t localhost/my-image:functional-384006 testdata/build --alsologtostderr:
I1212 20:10:26.863144   73916 out.go:360] Setting OutFile to fd 1 ...
I1212 20:10:26.863251   73916 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:10:26.863257   73916 out.go:374] Setting ErrFile to fd 2...
I1212 20:10:26.863262   73916 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:10:26.863609   73916 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
I1212 20:10:26.864645   73916 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1212 20:10:26.865641   73916 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1212 20:10:26.866379   73916 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
I1212 20:10:26.884103   73916 ssh_runner.go:195] Run: systemctl --version
I1212 20:10:26.884156   73916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
I1212 20:10:26.902157   73916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
I1212 20:10:27.013267   73916 build_images.go:162] Building image from path: /tmp/build.3704193878.tar
I1212 20:10:27.013362   73916 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 20:10:27.022114   73916 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3704193878.tar
I1212 20:10:27.026370   73916 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3704193878.tar: stat -c "%s %y" /var/lib/minikube/build/build.3704193878.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3704193878.tar': No such file or directory
I1212 20:10:27.026397   73916 ssh_runner.go:362] scp /tmp/build.3704193878.tar --> /var/lib/minikube/build/build.3704193878.tar (3072 bytes)
I1212 20:10:27.044349   73916 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3704193878
I1212 20:10:27.051912   73916 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3704193878 -xf /var/lib/minikube/build/build.3704193878.tar
I1212 20:10:27.059612   73916 containerd.go:394] Building image: /var/lib/minikube/build/build.3704193878
I1212 20:10:27.059685   73916 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3704193878 --local dockerfile=/var/lib/minikube/build/build.3704193878 --output type=image,name=localhost/my-image:functional-384006
#1 [internal] load build definition from Dockerfile
#1 DONE 0.0s
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:069ac9e2ed92765ca69583ab37d8ce5ae3c2237ad370be179ee4695f4547a884 0.0s done
#8 exporting config sha256:06f44feca4d5243f8148827aa41f020f045feda1157d8654198a92aa82e6a6d1 0.0s done
#8 naming to localhost/my-image:functional-384006 done
#8 DONE 0.2s
I1212 20:10:29.740400   73916 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3704193878 --local dockerfile=/var/lib/minikube/build/build.3704193878 --output type=image,name=localhost/my-image:functional-384006: (2.680687352s)
I1212 20:10:29.740465   73916 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3704193878
I1212 20:10:29.749142   73916 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3704193878.tar
I1212 20:10:29.763274   73916 build_images.go:218] Built localhost/my-image:functional-384006 from /tmp/build.3704193878.tar
I1212 20:10:29.763305   73916 build_images.go:134] succeeded building to: functional-384006
I1212 20:10:29.763310   73916 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.45s)

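The ImageBuild log above shows minikube's remote-build path: the CLI packs the build context into a tar under /tmp, copies it to /var/lib/minikube/build/ on the node, extracts it, and runs buildctl there. The Dockerfile is tiny (97 B per step #1): a gcr.io/k8s-minikube/busybox base, a RUN true layer, and an ADD content.txt /. Below is a minimal sketch of triggering the same flow through the public CLI; the context directory name is a stand-in, only the profile and tag are taken from this run.

// Sketch, not minikube's internal build_images code: drive the same
// tar -> copy -> buildctl flow via `minikube image build`.
package main

import (
	"log"
	"os/exec"
)

func main() {
	// "./build-context" is hypothetical; profile and tag match the log above.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-384006",
		"image", "build", "-t", "localhost/my-image:functional-384006", "./build-context")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("image build failed: %v\n%s", err, out)
	}
}
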
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-384006
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 image load --daemon kicbase/echo-server:functional-384006 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 image load --daemon kicbase/echo-server:functional-384006 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.36s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-384006
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 image load --daemon kicbase/echo-server:functional-384006 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.36s)

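The Setup/ImageLoadDaemon/ImageReloadDaemon/ImageTagAndLoadDaemon sequence above repeats one pattern: pull an image into the host Docker daemon, retag it for the profile, copy it into the cluster's containerd store with `image load --daemon`, and confirm with `image ls`. A sketch of that pattern, wrapping the exact commands from the log:

// Sketch of the pull/tag/load pattern shared by the daemon-load tests above.
package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
		log.Fatalf("%v: %v\n%s", args, err, out)
	}
}

func main() {
	run("docker", "pull", "kicbase/echo-server:1.0")
	run("docker", "tag", "kicbase/echo-server:1.0", "kicbase/echo-server:functional-384006")
	// Copy the host-daemon image into the cluster's containerd store.
	run("out/minikube-linux-arm64", "-p", "functional-384006",
		"image", "load", "--daemon", "kicbase/echo-server:functional-384006")
	// Confirm it is visible inside the cluster.
	run("out/minikube-linux-arm64", "-p", "functional-384006", "image", "ls")
}
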
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 image save kicbase/echo-server:functional-384006 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 image rm kicbase/echo-server:functional-384006 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.68s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.68s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-384006
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 image save --daemon kicbase/echo-server:functional-384006 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-384006
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.37s)

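ImageSaveToFile, ImageLoadFromFile, and ImageSaveDaemon round-trip the echo-server image between the cluster's containerd store, a tarball on the host, and the host Docker daemon. A sketch of the full loop, with the tarball path and tag taken from this run:

// Sketch of the save/load round trip performed by the three tests above.
package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
		log.Fatalf("%v: %v\n%s", args, err, out)
	}
}

func main() {
	bin := "out/minikube-linux-arm64"
	tag := "kicbase/echo-server:functional-384006"
	tar := "/home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar"
	run(bin, "-p", "functional-384006", "image", "save", tag, tar)        // cluster -> tarball
	run(bin, "-p", "functional-384006", "image", "load", tar)             // tarball -> cluster
	run("docker", "rmi", tag)                                             // drop the host copy
	run(bin, "-p", "functional-384006", "image", "save", "--daemon", tag) // cluster -> host daemon
	run("docker", "image", "inspect", tag)                                // verify it arrived
}
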
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-384006 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-384006
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-384006
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-384006
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (179.47s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1212 20:12:22.896608    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:12:48.860279    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:12:48.866578    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:12:48.877902    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:12:48.899380    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:12:48.940704    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:12:49.022064    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:12:49.183368    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:12:49.504703    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:12:50.146651    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:12:51.428041    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:12:53.989326    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:12:59.111486    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:13:09.353662    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:13:29.835264    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:14:10.797221    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-069327 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m58.567413105s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 status --alsologtostderr -v 5
E1212 20:14:51.907258    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/StartCluster (179.47s)

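StartCluster's single `start --ha` invocation is what produces the multi-control-plane topology the rest of this suite exercises; `--wait true` blocks until all components report healthy. A sketch of the same invocation from Go, with the flags copied from the ha_test.go:101 command above:

// Sketch: the HA start command logged above, replayed via os/exec.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-069327", "start",
		"--ha", "--memory", "3072", "--wait", "true",
		"--alsologtostderr", "-v", "5",
		"--driver=docker", "--container-runtime=containerd")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("HA start failed: %v\n%s", err, out)
	}
	log.Println("HA cluster is up")
}
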
TestMultiControlPlane/serial/DeployApp (7.26s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-069327 kubectl -- rollout status deployment/busybox: (4.347841528s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 kubectl -- exec busybox-7b57f96db7-27khb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 kubectl -- exec busybox-7b57f96db7-kqp8g -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 kubectl -- exec busybox-7b57f96db7-xngs4 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 kubectl -- exec busybox-7b57f96db7-27khb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 kubectl -- exec busybox-7b57f96db7-kqp8g -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 kubectl -- exec busybox-7b57f96db7-xngs4 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 kubectl -- exec busybox-7b57f96db7-27khb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 kubectl -- exec busybox-7b57f96db7-kqp8g -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 kubectl -- exec busybox-7b57f96db7-xngs4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.26s)

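DeployApp rolls out a three-replica busybox deployment and verifies DNS from every pod at three scopes: an external name (kubernetes.io), the short service name (kubernetes.default), and the fully qualified in-cluster name. A sketch of that per-pod loop; it uses the host kubectl with the ha-069327 context rather than `minikube kubectl --` as the test does:

// Sketch of the per-pod DNS verification loop; pod names are discovered
// rather than hard-coded, since they change each run.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "ha-069327", "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		log.Fatal(err)
	}
	targets := []string{"kubernetes.io", "kubernetes.default",
		"kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		for _, name := range targets {
			if err := exec.Command("kubectl", "--context", "ha-069327", "exec",
				pod, "--", "nslookup", name).Run(); err != nil {
				log.Fatalf("%s: nslookup %s failed: %v", pod, name, err)
			}
		}
	}
}
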
TestMultiControlPlane/serial/PingHostFromPods (2.2s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 kubectl -- exec busybox-7b57f96db7-27khb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 kubectl -- exec busybox-7b57f96db7-27khb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 kubectl -- exec busybox-7b57f96db7-kqp8g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 kubectl -- exec busybox-7b57f96db7-kqp8g -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 kubectl -- exec busybox-7b57f96db7-xngs4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 kubectl -- exec busybox-7b57f96db7-xngs4 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (2.20s)

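PingHostFromPods resolves host.minikube.internal inside each pod; the `awk 'NR==5' | cut -d' ' -f3` pipeline plucks the address field from the fifth line of busybox nslookup output, and the test then pings that address (192.168.49.1, the Docker bridge gateway, in this run). A sketch of the same two-step probe against one pod from the log:

// Sketch: resolve host.minikube.internal from a pod, then ping the result.
// The pod name below is from this run; any running busybox replica works.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	resolve := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	out, err := exec.Command("kubectl", "--context", "ha-069327", "exec",
		"busybox-7b57f96db7-27khb", "--", "sh", "-c", resolve).Output()
	if err != nil {
		log.Fatal(err)
	}
	ip := strings.TrimSpace(string(out))
	fmt.Println("host.minikube.internal ->", ip)
	if err := exec.Command("kubectl", "--context", "ha-069327", "exec",
		"busybox-7b57f96db7-27khb", "--", "ping", "-c", "1", ip).Run(); err != nil {
		log.Fatalf("ping %s failed: %v", ip, err)
	}
}
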
TestMultiControlPlane/serial/AddWorkerNode (60.05s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 node add --alsologtostderr -v 5
E1212 20:15:32.719000    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-069327 node add --alsologtostderr -v 5: (58.903824212s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-069327 status --alsologtostderr -v 5: (1.144675663s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.05s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-069327 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.095817556s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.10s)

TestMultiControlPlane/serial/CopyFile (20.01s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-069327 status --output json --alsologtostderr -v 5: (1.032357805s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 cp testdata/cp-test.txt ha-069327:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 cp ha-069327:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2190775439/001/cp-test_ha-069327.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 cp ha-069327:/home/docker/cp-test.txt ha-069327-m02:/home/docker/cp-test_ha-069327_ha-069327-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327-m02 "sudo cat /home/docker/cp-test_ha-069327_ha-069327-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 cp ha-069327:/home/docker/cp-test.txt ha-069327-m03:/home/docker/cp-test_ha-069327_ha-069327-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327-m03 "sudo cat /home/docker/cp-test_ha-069327_ha-069327-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 cp ha-069327:/home/docker/cp-test.txt ha-069327-m04:/home/docker/cp-test_ha-069327_ha-069327-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327-m04 "sudo cat /home/docker/cp-test_ha-069327_ha-069327-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 cp testdata/cp-test.txt ha-069327-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 cp ha-069327-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2190775439/001/cp-test_ha-069327-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 cp ha-069327-m02:/home/docker/cp-test.txt ha-069327:/home/docker/cp-test_ha-069327-m02_ha-069327.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327 "sudo cat /home/docker/cp-test_ha-069327-m02_ha-069327.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 cp ha-069327-m02:/home/docker/cp-test.txt ha-069327-m03:/home/docker/cp-test_ha-069327-m02_ha-069327-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327-m03 "sudo cat /home/docker/cp-test_ha-069327-m02_ha-069327-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 cp ha-069327-m02:/home/docker/cp-test.txt ha-069327-m04:/home/docker/cp-test_ha-069327-m02_ha-069327-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327-m04 "sudo cat /home/docker/cp-test_ha-069327-m02_ha-069327-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 cp testdata/cp-test.txt ha-069327-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 cp ha-069327-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2190775439/001/cp-test_ha-069327-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 cp ha-069327-m03:/home/docker/cp-test.txt ha-069327:/home/docker/cp-test_ha-069327-m03_ha-069327.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327 "sudo cat /home/docker/cp-test_ha-069327-m03_ha-069327.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 cp ha-069327-m03:/home/docker/cp-test.txt ha-069327-m02:/home/docker/cp-test_ha-069327-m03_ha-069327-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327-m02 "sudo cat /home/docker/cp-test_ha-069327-m03_ha-069327-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 cp ha-069327-m03:/home/docker/cp-test.txt ha-069327-m04:/home/docker/cp-test_ha-069327-m03_ha-069327-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327-m04 "sudo cat /home/docker/cp-test_ha-069327-m03_ha-069327-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 cp testdata/cp-test.txt ha-069327-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 cp ha-069327-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2190775439/001/cp-test_ha-069327-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 cp ha-069327-m04:/home/docker/cp-test.txt ha-069327:/home/docker/cp-test_ha-069327-m04_ha-069327.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327 "sudo cat /home/docker/cp-test_ha-069327-m04_ha-069327.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 cp ha-069327-m04:/home/docker/cp-test.txt ha-069327-m02:/home/docker/cp-test_ha-069327-m04_ha-069327-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327-m02 "sudo cat /home/docker/cp-test_ha-069327-m04_ha-069327-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 cp ha-069327-m04:/home/docker/cp-test.txt ha-069327-m03:/home/docker/cp-test_ha-069327-m04_ha-069327-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 ssh -n ha-069327-m03 "sudo cat /home/docker/cp-test_ha-069327-m04_ha-069327-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.01s)

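CopyFile walks the full copy matrix of `minikube cp` (host to node, node back to host, and node to node across all four nodes), verifying each file with `ssh -n <node> sudo cat`. A sketch of a single host -> primary -> m02 leg, with paths from the log:

// Sketch of one leg of the cp matrix above.
package main

import (
	"log"
	"os/exec"
)

func run(args ...string) []byte {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v: %v\n%s", args, err, out)
	}
	return out
}

func main() {
	bin := "out/minikube-linux-arm64"
	// Host file onto the primary node, then from the primary node onto m02.
	run(bin, "-p", "ha-069327", "cp", "testdata/cp-test.txt", "ha-069327:/home/docker/cp-test.txt")
	run(bin, "-p", "ha-069327", "cp", "ha-069327:/home/docker/cp-test.txt",
		"ha-069327-m02:/home/docker/cp-test_ha-069327_ha-069327-m02.txt")
	// Verify the copy landed, over ssh.
	out := run(bin, "-p", "ha-069327", "ssh", "-n", "ha-069327-m02",
		"sudo cat /home/docker/cp-test_ha-069327_ha-069327-m02.txt")
	log.Printf("m02 sees: %s", out)
}
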
TestMultiControlPlane/serial/StopSecondaryNode (12.95s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-069327 node stop m02 --alsologtostderr -v 5: (12.14793643s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-069327 status --alsologtostderr -v 5: exit status 7 (799.833627ms)

-- stdout --
	ha-069327
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-069327-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-069327-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-069327-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1212 20:16:35.336110   91359 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:16:35.336290   91359 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:16:35.336319   91359 out.go:374] Setting ErrFile to fd 2...
	I1212 20:16:35.336341   91359 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:16:35.336786   91359 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 20:16:35.338187   91359 out.go:368] Setting JSON to false
	I1212 20:16:35.338220   91359 mustload.go:66] Loading cluster: ha-069327
	I1212 20:16:35.338489   91359 notify.go:221] Checking for updates...
	I1212 20:16:35.338689   91359 config.go:182] Loaded profile config "ha-069327": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1212 20:16:35.338713   91359 status.go:174] checking status of ha-069327 ...
	I1212 20:16:35.339535   91359 cli_runner.go:164] Run: docker container inspect ha-069327 --format={{.State.Status}}
	I1212 20:16:35.359668   91359 status.go:371] ha-069327 host status = "Running" (err=<nil>)
	I1212 20:16:35.359695   91359 host.go:66] Checking if "ha-069327" exists ...
	I1212 20:16:35.360108   91359 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-069327
	I1212 20:16:35.377248   91359 host.go:66] Checking if "ha-069327" exists ...
	I1212 20:16:35.377556   91359 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:16:35.377667   91359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-069327
	I1212 20:16:35.405315   91359 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/ha-069327/id_rsa Username:docker}
	I1212 20:16:35.513683   91359 ssh_runner.go:195] Run: systemctl --version
	I1212 20:16:35.521272   91359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:16:35.533905   91359 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:16:35.616267   91359 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-12 20:16:35.607227542 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:16:35.616816   91359 kubeconfig.go:125] found "ha-069327" server: "https://192.168.49.254:8443"
	I1212 20:16:35.616849   91359 api_server.go:166] Checking apiserver status ...
	I1212 20:16:35.616894   91359 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:16:35.630381   91359 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1417/cgroup
	I1212 20:16:35.638917   91359 api_server.go:182] apiserver freezer: "11:freezer:/docker/1baaf1668d2acd8622a49bac1ba511acbb6acd5ba8cd039d6945d83c5ce3322f/kubepods/burstable/podfffa6fbcd1a554cb01695a4569ff64e1/d626637cc48fca68307b45ffc21f2644991d966e17fc7d37494a4aa69b98bab1"
	I1212 20:16:35.638982   91359 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1baaf1668d2acd8622a49bac1ba511acbb6acd5ba8cd039d6945d83c5ce3322f/kubepods/burstable/podfffa6fbcd1a554cb01695a4569ff64e1/d626637cc48fca68307b45ffc21f2644991d966e17fc7d37494a4aa69b98bab1/freezer.state
	I1212 20:16:35.646548   91359 api_server.go:204] freezer state: "THAWED"
	I1212 20:16:35.646576   91359 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1212 20:16:35.654686   91359 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1212 20:16:35.654710   91359 status.go:463] ha-069327 apiserver status = Running (err=<nil>)
	I1212 20:16:35.654720   91359 status.go:176] ha-069327 status: &{Name:ha-069327 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 20:16:35.654742   91359 status.go:174] checking status of ha-069327-m02 ...
	I1212 20:16:35.655056   91359 cli_runner.go:164] Run: docker container inspect ha-069327-m02 --format={{.State.Status}}
	I1212 20:16:35.673474   91359 status.go:371] ha-069327-m02 host status = "Stopped" (err=<nil>)
	I1212 20:16:35.673507   91359 status.go:384] host is not running, skipping remaining checks
	I1212 20:16:35.673514   91359 status.go:176] ha-069327-m02 status: &{Name:ha-069327-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 20:16:35.673534   91359 status.go:174] checking status of ha-069327-m03 ...
	I1212 20:16:35.673836   91359 cli_runner.go:164] Run: docker container inspect ha-069327-m03 --format={{.State.Status}}
	I1212 20:16:35.690554   91359 status.go:371] ha-069327-m03 host status = "Running" (err=<nil>)
	I1212 20:16:35.690575   91359 host.go:66] Checking if "ha-069327-m03" exists ...
	I1212 20:16:35.690866   91359 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-069327-m03
	I1212 20:16:35.708506   91359 host.go:66] Checking if "ha-069327-m03" exists ...
	I1212 20:16:35.708807   91359 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:16:35.708860   91359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-069327-m03
	I1212 20:16:35.729942   91359 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/ha-069327-m03/id_rsa Username:docker}
	I1212 20:16:35.845472   91359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:16:35.858839   91359 kubeconfig.go:125] found "ha-069327" server: "https://192.168.49.254:8443"
	I1212 20:16:35.858915   91359 api_server.go:166] Checking apiserver status ...
	I1212 20:16:35.858997   91359 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:16:35.871122   91359 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1352/cgroup
	I1212 20:16:35.879828   91359 api_server.go:182] apiserver freezer: "11:freezer:/docker/15a48d988423782ab9138913d08427bdfa2b0387279135428c19c2ba54babfda/kubepods/burstable/podf02866f4aaf649a505174233661fe9c4/0fdf03b59b8ebaa773d1cd83b7de596a2bd6435f6bc76b45b824cccda5a787a6"
	I1212 20:16:35.879975   91359 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/15a48d988423782ab9138913d08427bdfa2b0387279135428c19c2ba54babfda/kubepods/burstable/podf02866f4aaf649a505174233661fe9c4/0fdf03b59b8ebaa773d1cd83b7de596a2bd6435f6bc76b45b824cccda5a787a6/freezer.state
	I1212 20:16:35.888080   91359 api_server.go:204] freezer state: "THAWED"
	I1212 20:16:35.888109   91359 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1212 20:16:35.896155   91359 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1212 20:16:35.896182   91359 status.go:463] ha-069327-m03 apiserver status = Running (err=<nil>)
	I1212 20:16:35.896191   91359 status.go:176] ha-069327-m03 status: &{Name:ha-069327-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 20:16:35.896206   91359 status.go:174] checking status of ha-069327-m04 ...
	I1212 20:16:35.896538   91359 cli_runner.go:164] Run: docker container inspect ha-069327-m04 --format={{.State.Status}}
	I1212 20:16:35.918021   91359 status.go:371] ha-069327-m04 host status = "Running" (err=<nil>)
	I1212 20:16:35.918044   91359 host.go:66] Checking if "ha-069327-m04" exists ...
	I1212 20:16:35.918353   91359 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-069327-m04
	I1212 20:16:35.940553   91359 host.go:66] Checking if "ha-069327-m04" exists ...
	I1212 20:16:35.940868   91359 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:16:35.940916   91359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-069327-m04
	I1212 20:16:35.959986   91359 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/ha-069327-m04/id_rsa Username:docker}
	I1212 20:16:36.069459   91359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:16:36.083031   91359 status.go:176] ha-069327-m04 status: &{Name:ha-069327-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.95s)

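The verbose status output above shows how minikube decides per-node health: `docker container inspect` for the host state, `systemctl is-active kubelet` for the kubelet, and for control planes a freezer-cgroup check on the kube-apiserver process plus an HTTPS probe of /healthz on the HA endpoint 192.168.49.254:8443. A rough sketch of the last two checks done by hand; the container and pod IDs are specific to this run and change on every start, and certificate verification is skipped because the apiserver's cert is not in the host trust store:

// Rough sketch of the control-plane health checks visible in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
)

func main() {
	// 1. Inside the node: the apiserver's freezer cgroup must read THAWED.
	state, err := exec.Command("out/minikube-linux-arm64", "-p", "ha-069327", "ssh",
		"sudo cat /sys/fs/cgroup/freezer/docker/1baaf1668d2acd8622a49bac1ba511acbb6acd5ba8cd039d6945d83c5ce3322f/kubepods/burstable/podfffa6fbcd1a554cb01695a4569ff64e1/d626637cc48fca68307b45ffc21f2644991d966e17fc7d37494a4aa69b98bab1/freezer.state").Output()
	if err == nil {
		fmt.Printf("freezer state: %s", state)
	}

	// 2. From the host: probe /healthz on the HA virtual endpoint.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}
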
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

TestMultiControlPlane/serial/RestartSecondaryNode (14.14s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-069327 node start m02 --alsologtostderr -v 5: (12.164153757s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-069327 status --alsologtostderr -v 5: (1.833324348s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.14s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.41s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.407145297s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.41s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (99.72s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 stop --alsologtostderr -v 5
E1212 20:17:22.896961    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-069327 stop --alsologtostderr -v 5: (37.690267893s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 start --wait true --alsologtostderr -v 5
E1212 20:17:48.856893    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:17:54.973331    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:18:16.561202    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-069327 start --wait true --alsologtostderr -v 5: (1m1.842927437s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (99.72s)

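RestartClusterKeepsNodes snapshots `node list`, stops all four nodes, restarts with `--wait true`, and asserts the node list is unchanged. A condensed sketch of that assertion (the test's --alsologtostderr/-v flags are dropped here):

// Sketch: stop and restart the cluster, then assert `node list` is stable.
package main

import (
	"bytes"
	"log"
	"os/exec"
)

func run(args ...string) []byte {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v: %v\n%s", args, err, out)
	}
	return out
}

func main() {
	bin := "out/minikube-linux-arm64"
	before := run(bin, "-p", "ha-069327", "node", "list")
	run(bin, "-p", "ha-069327", "stop")
	run(bin, "-p", "ha-069327", "start", "--wait", "true")
	if !bytes.Equal(before, run(bin, "-p", "ha-069327", "node", "list")) {
		log.Fatal("node list changed across the restart")
	}
	log.Println("all nodes survived the restart")
}
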
TestMultiControlPlane/serial/DeleteSecondaryNode (11.98s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-069327 node delete m03 --alsologtostderr -v 5: (11.036813029s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.98s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

TestMultiControlPlane/serial/StopCluster (36.31s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-069327 stop --alsologtostderr -v 5: (36.197255879s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-069327 status --alsologtostderr -v 5: exit status 7 (109.337408ms)

-- stdout --
	ha-069327
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-069327-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-069327-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1212 20:19:21.195363  106209 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:19:21.195473  106209 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:19:21.195484  106209 out.go:374] Setting ErrFile to fd 2...
	I1212 20:19:21.195490  106209 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:19:21.195756  106209 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 20:19:21.195955  106209 out.go:368] Setting JSON to false
	I1212 20:19:21.196008  106209 mustload.go:66] Loading cluster: ha-069327
	I1212 20:19:21.196080  106209 notify.go:221] Checking for updates...
	I1212 20:19:21.196947  106209 config.go:182] Loaded profile config "ha-069327": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1212 20:19:21.196969  106209 status.go:174] checking status of ha-069327 ...
	I1212 20:19:21.197476  106209 cli_runner.go:164] Run: docker container inspect ha-069327 --format={{.State.Status}}
	I1212 20:19:21.215222  106209 status.go:371] ha-069327 host status = "Stopped" (err=<nil>)
	I1212 20:19:21.215242  106209 status.go:384] host is not running, skipping remaining checks
	I1212 20:19:21.215324  106209 status.go:176] ha-069327 status: &{Name:ha-069327 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 20:19:21.215362  106209 status.go:174] checking status of ha-069327-m02 ...
	I1212 20:19:21.215648  106209 cli_runner.go:164] Run: docker container inspect ha-069327-m02 --format={{.State.Status}}
	I1212 20:19:21.231957  106209 status.go:371] ha-069327-m02 host status = "Stopped" (err=<nil>)
	I1212 20:19:21.231980  106209 status.go:384] host is not running, skipping remaining checks
	I1212 20:19:21.231987  106209 status.go:176] ha-069327-m02 status: &{Name:ha-069327-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 20:19:21.232006  106209 status.go:174] checking status of ha-069327-m04 ...
	I1212 20:19:21.232314  106209 cli_runner.go:164] Run: docker container inspect ha-069327-m04 --format={{.State.Status}}
	I1212 20:19:21.257590  106209 status.go:371] ha-069327-m04 host status = "Stopped" (err=<nil>)
	I1212 20:19:21.257611  106209 status.go:384] host is not running, skipping remaining checks
	I1212 20:19:21.257619  106209 status.go:176] ha-069327-m04 status: &{Name:ha-069327-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.31s)

TestMultiControlPlane/serial/RestartCluster (60.96s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1212 20:19:51.907004    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-069327 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (59.957594067s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (60.96s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.82s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.82s)

TestMultiControlPlane/serial/AddSecondaryNode (51.46s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-069327 node add --control-plane --alsologtostderr -v 5: (50.351740763s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-069327 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-069327 status --alsologtostderr -v 5: (1.103477135s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (51.46s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.089943838s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

TestJSONOutput/start/Command (78.6s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-862668 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
E1212 20:22:22.896668    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-862668 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (1m18.595965474s)
--- PASS: TestJSONOutput/start/Command (78.60s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.69s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-862668 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.69s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-862668 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.69s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.5s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-862668 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-862668 --output=json --user=testUser: (1.50256919s)
--- PASS: TestJSONOutput/stop/Command (1.50s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-779676 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-779676 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (95.000861ms)

-- stdout --
	{"specversion":"1.0","id":"6a528352-63f5-43f7-ace8-23d2900ca5a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-779676] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"991c019f-b93e-44f0-bf2e-c0f8cd1c4c9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22112"}}
	{"specversion":"1.0","id":"79cd5ebb-88ed-45b6-8eea-0b44cd8fce98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3635bc77-e4a2-4c08-a79e-9a7e91b6cd10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig"}}
	{"specversion":"1.0","id":"45db288a-c522-4f9e-b427-537f7778bc29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube"}}
	{"specversion":"1.0","id":"47cf5a29-3fa8-48da-9f7f-5646ff6a98d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"23f89b88-e4c6-46b7-a37e-9368fb946d59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5a955cc5-f2ea-42fd-a8e6-8af3e47cd0d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-779676" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-779676
--- PASS: TestErrorJSONOutput (0.25s)
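
Each stdout line above is one CloudEvents-style JSON object, which is what --output=json emits. A minimal decoding sketch; the struct is inferred from the fields shown in the log (specversion, id, source, type, datacontenttype, data), not taken from minikube's source:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type cloudEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"` // all data values above are strings
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json | this program
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		// An error event such as DRV_UNSUPPORTED_OS carries its exit code in data["exitcode"].
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}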

TestKicCustomNetwork/create_custom_network (41.38s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-959848 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-959848 --network=: (39.207681562s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-959848" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-959848
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-959848: (2.147305505s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.38s)

TestKicCustomNetwork/use_default_bridge_network (36.88s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-134078 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-134078 --network=bridge: (34.699761331s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-134078" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-134078
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-134078: (2.147975306s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.88s)

TestKicExistingNetwork (32.73s)

=== RUN   TestKicExistingNetwork
I1212 20:24:09.742483    4120 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1212 20:24:09.758504    4120 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1212 20:24:09.758576    4120 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1212 20:24:09.758595    4120 cli_runner.go:164] Run: docker network inspect existing-network
W1212 20:24:09.775386    4120 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1212 20:24:09.775414    4120 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1212 20:24:09.775428    4120 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1212 20:24:09.775554    4120 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1212 20:24:09.792734    4120 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c977eaa96b74 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:96:50:90:af:1f:c1} reservation:<nil>}
I1212 20:24:09.793016    4120 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400143fa60}
I1212 20:24:09.793037    4120 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1212 20:24:09.793096    4120 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1212 20:24:09.847281    4120 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-302553 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-302553 --network=existing-network: (30.403390799s)
helpers_test.go:176: Cleaning up "existing-network-302553" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-302553
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-302553: (2.187622351s)
I1212 20:24:42.455213    4120 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.73s)
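
The network_create.go lines above show the inspect-then-create flow: docker network inspect exits non-zero because the network does not exist, a free private subnet is picked, and the network is created. A simplified os/exec sketch of that flow, with the flags and values copied from the log; error handling and subnet selection are reduced to the bare minimum:

package main

import (
	"fmt"
	"os/exec"
)

func ensureNetwork(name, subnet, gateway string) error {
	// Mirrors "docker network inspect <name>": a non-zero exit means it is missing.
	if exec.Command("docker", "network", "inspect", name).Run() == nil {
		return nil // already exists
	}
	// Same flags as the "docker network create" line logged above.
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet, "--gateway="+gateway,
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io="+name,
		name).CombinedOutput()
	if err != nil {
		return fmt.Errorf("network create: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := ensureNetwork("existing-network", "192.168.58.0/24", "192.168.58.1"); err != nil {
		fmt.Println(err)
	}
}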

TestKicCustomSubnet (38.23s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-323593 --subnet=192.168.60.0/24
E1212 20:24:51.908435    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-323593 --subnet=192.168.60.0/24: (35.969304456s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-323593 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-323593" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-323593
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-323593: (2.231202975s)
--- PASS: TestKicCustomSubnet (38.23s)

TestKicStaticIP (32.11s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-061144 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-061144 --static-ip=192.168.200.200: (29.784028094s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-061144 ip
helpers_test.go:176: Cleaning up "static-ip-061144" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-061144
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-061144: (2.151272165s)
--- PASS: TestKicStaticIP (32.11s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (73.72s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-714749 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-714749 --driver=docker  --container-runtime=containerd: (34.691319808s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-717359 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-717359 --driver=docker  --container-runtime=containerd: (33.027726586s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-714749
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-717359
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-717359" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-717359
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-717359: (2.501311789s)
helpers_test.go:176: Cleaning up "first-714749" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-714749
E1212 20:27:05.975696    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-714749: (2.023889055s)
--- PASS: TestMinikubeProfile (73.72s)

TestMountStart/serial/StartWithMountFirst (8.38s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-363472 --memory=3072 --mount-string /tmp/TestMountStartserial3021391311/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-363472 --memory=3072 --mount-string /tmp/TestMountStartserial3021391311/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.380513107s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.38s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-363472 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (8.36s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-365466 --memory=3072 --mount-string /tmp/TestMountStartserial3021391311/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-365466 --memory=3072 --mount-string /tmp/TestMountStartserial3021391311/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.362788516s)
E1212 20:27:22.896929    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMountStart/serial/StartWithMountSecond (8.36s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-365466 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-363472 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-363472 --alsologtostderr -v=5: (1.705349441s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-365466 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-365466
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-365466: (1.288330661s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (8.01s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-365466
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-365466: (7.009496668s)
--- PASS: TestMountStart/serial/RestartStopped (8.01s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-365466 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (106.7s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-276984 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1212 20:27:48.857399    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:29:11.922966    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-276984 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m46.152892141s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (106.70s)

TestMultiNode/serial/DeployApp2Nodes (5.02s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-276984 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-276984 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-276984 -- rollout status deployment/busybox: (3.205474152s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-276984 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-276984 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-276984 -- exec busybox-7b57f96db7-rpp9h -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-276984 -- exec busybox-7b57f96db7-wbhpq -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-276984 -- exec busybox-7b57f96db7-rpp9h -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-276984 -- exec busybox-7b57f96db7-wbhpq -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-276984 -- exec busybox-7b57f96db7-rpp9h -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-276984 -- exec busybox-7b57f96db7-wbhpq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.02s)

TestMultiNode/serial/PingHostFrom2Pods (1.03s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-276984 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-276984 -- exec busybox-7b57f96db7-rpp9h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-276984 -- exec busybox-7b57f96db7-rpp9h -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-276984 -- exec busybox-7b57f96db7-wbhpq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-276984 -- exec busybox-7b57f96db7-wbhpq -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.03s)
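
The pipeline in the exec commands above, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, pulls field 3 of line 5 out of busybox nslookup's output, i.e. the resolved host IP that the subsequent ping targets. A small Go sketch of that extraction; the sample text is hypothetical busybox nslookup output, not captured from this run:

package main

import (
	"fmt"
	"strings"
)

// hostIP applies the awk 'NR==5' / cut -d' ' -f3 steps to nslookup output.
func hostIP(nslookupOut string) string {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // line 5, split on single spaces like cut
	if len(fields) < 3 {
		return ""
	}
	return fields[2] // field 3
}

func main() {
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.67.1 host.minikube.internal\n"
	fmt.Println(hostIP(sample)) // 192.168.67.1
}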

TestMultiNode/serial/AddNode (27.35s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-276984 -v=5 --alsologtostderr
E1212 20:29:51.906920    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-276984 -v=5 --alsologtostderr: (26.676322135s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.35s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-276984 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.73s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.73s)

TestMultiNode/serial/CopyFile (11.19s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 cp testdata/cp-test.txt multinode-276984:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 ssh -n multinode-276984 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 cp multinode-276984:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile448868392/001/cp-test_multinode-276984.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 ssh -n multinode-276984 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 cp multinode-276984:/home/docker/cp-test.txt multinode-276984-m02:/home/docker/cp-test_multinode-276984_multinode-276984-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 ssh -n multinode-276984 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 ssh -n multinode-276984-m02 "sudo cat /home/docker/cp-test_multinode-276984_multinode-276984-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 cp multinode-276984:/home/docker/cp-test.txt multinode-276984-m03:/home/docker/cp-test_multinode-276984_multinode-276984-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 ssh -n multinode-276984 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 ssh -n multinode-276984-m03 "sudo cat /home/docker/cp-test_multinode-276984_multinode-276984-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 cp testdata/cp-test.txt multinode-276984-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 ssh -n multinode-276984-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 cp multinode-276984-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile448868392/001/cp-test_multinode-276984-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 ssh -n multinode-276984-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 cp multinode-276984-m02:/home/docker/cp-test.txt multinode-276984:/home/docker/cp-test_multinode-276984-m02_multinode-276984.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 ssh -n multinode-276984-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 ssh -n multinode-276984 "sudo cat /home/docker/cp-test_multinode-276984-m02_multinode-276984.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 cp multinode-276984-m02:/home/docker/cp-test.txt multinode-276984-m03:/home/docker/cp-test_multinode-276984-m02_multinode-276984-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 ssh -n multinode-276984-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 ssh -n multinode-276984-m03 "sudo cat /home/docker/cp-test_multinode-276984-m02_multinode-276984-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 cp testdata/cp-test.txt multinode-276984-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 ssh -n multinode-276984-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 cp multinode-276984-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile448868392/001/cp-test_multinode-276984-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 ssh -n multinode-276984-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 cp multinode-276984-m03:/home/docker/cp-test.txt multinode-276984:/home/docker/cp-test_multinode-276984-m03_multinode-276984.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 ssh -n multinode-276984-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 ssh -n multinode-276984 "sudo cat /home/docker/cp-test_multinode-276984-m03_multinode-276984.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 cp multinode-276984-m03:/home/docker/cp-test.txt multinode-276984-m02:/home/docker/cp-test_multinode-276984-m03_multinode-276984-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 ssh -n multinode-276984-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 ssh -n multinode-276984-m02 "sudo cat /home/docker/cp-test_multinode-276984-m03_multinode-276984-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.19s)

TestMultiNode/serial/StopNode (2.42s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-276984 node stop m03: (1.323557237s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-276984 status: exit status 7 (556.754194ms)

-- stdout --
	multinode-276984
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-276984-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-276984-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-276984 status --alsologtostderr: exit status 7 (537.404143ms)

-- stdout --
	multinode-276984
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-276984-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-276984-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1212 20:30:11.379731  159171 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:30:11.379931  159171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:30:11.379958  159171 out.go:374] Setting ErrFile to fd 2...
	I1212 20:30:11.379979  159171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:30:11.380380  159171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 20:30:11.380630  159171 out.go:368] Setting JSON to false
	I1212 20:30:11.380678  159171 mustload.go:66] Loading cluster: multinode-276984
	I1212 20:30:11.381434  159171 config.go:182] Loaded profile config "multinode-276984": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1212 20:30:11.381482  159171 status.go:174] checking status of multinode-276984 ...
	I1212 20:30:11.382315  159171 cli_runner.go:164] Run: docker container inspect multinode-276984 --format={{.State.Status}}
	I1212 20:30:11.382664  159171 notify.go:221] Checking for updates...
	I1212 20:30:11.402714  159171 status.go:371] multinode-276984 host status = "Running" (err=<nil>)
	I1212 20:30:11.402746  159171 host.go:66] Checking if "multinode-276984" exists ...
	I1212 20:30:11.403062  159171 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-276984
	I1212 20:30:11.435956  159171 host.go:66] Checking if "multinode-276984" exists ...
	I1212 20:30:11.436305  159171 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:30:11.436365  159171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-276984
	I1212 20:30:11.454479  159171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/multinode-276984/id_rsa Username:docker}
	I1212 20:30:11.558729  159171 ssh_runner.go:195] Run: systemctl --version
	I1212 20:30:11.566353  159171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:30:11.580510  159171 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:30:11.640637  159171 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-12 20:30:11.630432744 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:30:11.641274  159171 kubeconfig.go:125] found "multinode-276984" server: "https://192.168.67.2:8443"
	I1212 20:30:11.641314  159171 api_server.go:166] Checking apiserver status ...
	I1212 20:30:11.641365  159171 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:30:11.653528  159171 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1364/cgroup
	I1212 20:30:11.662464  159171 api_server.go:182] apiserver freezer: "11:freezer:/docker/14d7c4615028ebd86eecfc58adf20a5c950b444df07ab3bd2f3e1e4a49a79f9b/kubepods/burstable/pod82c65976cda38742d59ee38e56d6606d/6c17cc8d1bb837d69dec9d4d5e033d8c6b47053ee58218fd7f9bfb6d5c4ae6f3"
	I1212 20:30:11.662535  159171 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/14d7c4615028ebd86eecfc58adf20a5c950b444df07ab3bd2f3e1e4a49a79f9b/kubepods/burstable/pod82c65976cda38742d59ee38e56d6606d/6c17cc8d1bb837d69dec9d4d5e033d8c6b47053ee58218fd7f9bfb6d5c4ae6f3/freezer.state
	I1212 20:30:11.670421  159171 api_server.go:204] freezer state: "THAWED"
	I1212 20:30:11.670452  159171 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1212 20:30:11.679212  159171 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1212 20:30:11.679242  159171 status.go:463] multinode-276984 apiserver status = Running (err=<nil>)
	I1212 20:30:11.679259  159171 status.go:176] multinode-276984 status: &{Name:multinode-276984 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 20:30:11.679298  159171 status.go:174] checking status of multinode-276984-m02 ...
	I1212 20:30:11.679652  159171 cli_runner.go:164] Run: docker container inspect multinode-276984-m02 --format={{.State.Status}}
	I1212 20:30:11.696616  159171 status.go:371] multinode-276984-m02 host status = "Running" (err=<nil>)
	I1212 20:30:11.696643  159171 host.go:66] Checking if "multinode-276984-m02" exists ...
	I1212 20:30:11.696961  159171 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-276984-m02
	I1212 20:30:11.712957  159171 host.go:66] Checking if "multinode-276984-m02" exists ...
	I1212 20:30:11.713270  159171 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:30:11.713314  159171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-276984-m02
	I1212 20:30:11.731475  159171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32918 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/multinode-276984-m02/id_rsa Username:docker}
	I1212 20:30:11.832849  159171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:30:11.845568  159171 status.go:176] multinode-276984-m02 status: &{Name:multinode-276984-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1212 20:30:11.845600  159171 status.go:174] checking status of multinode-276984-m03 ...
	I1212 20:30:11.845906  159171 cli_runner.go:164] Run: docker container inspect multinode-276984-m03 --format={{.State.Status}}
	I1212 20:30:11.863063  159171 status.go:371] multinode-276984-m03 host status = "Stopped" (err=<nil>)
	I1212 20:30:11.863088  159171 status.go:384] host is not running, skipping remaining checks
	I1212 20:30:11.863095  159171 status.go:176] multinode-276984-m03 status: &{Name:multinode-276984-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)
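
The stderr trace above shows how status checks the control plane: find the kube-apiserver PID with pgrep, read its freezer cgroup state, then probe https://<node-ip>:8443/healthz and treat a 200 "ok" as healthy. A minimal sketch of just the HTTP probe, with the URL taken from the log; the TLS skip and timeout are assumptions for an ad-hoc check, not minikube's implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is not in the host trust store, so skip
		// verification for this one-off probe (never do this in production).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok
}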

TestMultiNode/serial/StartAfterStop (7.92s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-276984 node start m03 -v=5 --alsologtostderr: (7.119000544s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.92s)

TestMultiNode/serial/RestartKeepsNodes (78.96s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-276984
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-276984
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-276984: (25.117520671s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-276984 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-276984 --wait=true -v=5 --alsologtostderr: (53.714119166s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-276984
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.96s)

TestMultiNode/serial/DeleteNode (5.74s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-276984 node delete m03: (5.039741473s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.74s)

TestMultiNode/serial/StopMultiNode (24.06s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-276984 stop: (23.872204666s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-276984 status: exit status 7 (101.606257ms)

-- stdout --
	multinode-276984
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-276984-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-276984 status --alsologtostderr: exit status 7 (88.680068ms)

-- stdout --
	multinode-276984
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-276984-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1212 20:32:08.516257  167957 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:32:08.516374  167957 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:32:08.516383  167957 out.go:374] Setting ErrFile to fd 2...
	I1212 20:32:08.516389  167957 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:32:08.516650  167957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 20:32:08.516832  167957 out.go:368] Setting JSON to false
	I1212 20:32:08.516866  167957 mustload.go:66] Loading cluster: multinode-276984
	I1212 20:32:08.516970  167957 notify.go:221] Checking for updates...
	I1212 20:32:08.517265  167957 config.go:182] Loaded profile config "multinode-276984": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1212 20:32:08.517289  167957 status.go:174] checking status of multinode-276984 ...
	I1212 20:32:08.517814  167957 cli_runner.go:164] Run: docker container inspect multinode-276984 --format={{.State.Status}}
	I1212 20:32:08.536722  167957 status.go:371] multinode-276984 host status = "Stopped" (err=<nil>)
	I1212 20:32:08.536746  167957 status.go:384] host is not running, skipping remaining checks
	I1212 20:32:08.536759  167957 status.go:176] multinode-276984 status: &{Name:multinode-276984 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 20:32:08.536794  167957 status.go:174] checking status of multinode-276984-m02 ...
	I1212 20:32:08.537100  167957 cli_runner.go:164] Run: docker container inspect multinode-276984-m02 --format={{.State.Status}}
	I1212 20:32:08.555765  167957 status.go:371] multinode-276984-m02 host status = "Stopped" (err=<nil>)
	I1212 20:32:08.555786  167957 status.go:384] host is not running, skipping remaining checks
	I1212 20:32:08.555799  167957 status.go:176] multinode-276984-m02 status: &{Name:multinode-276984-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.06s)
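
Note: "minikube status" deliberately exits 7 when hosts are stopped, so the non-zero exits above are the expected outcome, not failures. Scripts that poll status should tolerate this (a sketch; profile name illustrative):

  # a non-zero exit here means "not fully running", not a broken command
  if ! minikube -p multinode-demo status; then
    echo "cluster stopped or degraded (expected right after a stop)"
  fi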

TestMultiNode/serial/RestartMultiNode (54.24s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-276984 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1212 20:32:22.896360    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:32:48.857312    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-276984 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (53.510229811s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-276984 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.24s)

TestMultiNode/serial/ValidateNameConflict (35.16s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-276984
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-276984-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-276984-m02 --driver=docker  --container-runtime=containerd: exit status 14 (90.164239ms)

-- stdout --
	* [multinode-276984-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-276984-m02' is duplicated with machine name 'multinode-276984-m02' in profile 'multinode-276984'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-276984-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-276984-m03 --driver=docker  --container-runtime=containerd: (32.618709997s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-276984
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-276984: exit status 80 (340.440445ms)

-- stdout --
	* Adding node m03 to cluster multinode-276984 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-276984-m03 already exists in multinode-276984-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-276984-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-276984-m03: (2.061903141s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.16s)
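
Note: profile names must not collide with the machine names of an existing multi-node profile (the primary name plus -m02, -m03, ... suffixes), and node add refuses a node name that another profile already owns. A sketch of the two rejected calls (names illustrative):

  minikube start -p multinode-demo        # machines: multinode-demo, multinode-demo-m02, ...
  minikube start -p multinode-demo-m02    # exit 14 (MK_USAGE): duplicates an existing machine name
  minikube start -p multinode-demo-m03    # succeeds as its own single-node profile...
  minikube node add -p multinode-demo     # ...then exit 80 (GUEST_NODE_ADD): the m03 name is taken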

TestPreload (120.99s)
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-927081 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
E1212 20:34:34.975555    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-927081 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (57.880114928s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-927081 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-927081 image pull gcr.io/k8s-minikube/busybox: (2.27444899s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-927081
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-927081: (5.891415204s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-927081 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1212 20:34:51.908362    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-927081 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (52.194981773s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-927081 image list
helpers_test.go:176: Cleaning up "test-preload-927081" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-927081
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-927081: (2.505851943s)
--- PASS: TestPreload (120.99s)
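
Note: the test pulls an extra image into a cluster started with --preload=false, then restarts with --preload=true and checks that applying the preloaded image tarball did not wipe the manually pulled image. A hedged reproduction (profile name illustrative):

  minikube start -p preload-demo --preload=false
  minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
  minikube stop -p preload-demo
  minikube start -p preload-demo --preload=true
  minikube -p preload-demo image list    # busybox should still be listed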

TestScheduledStopUnix (105.72s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-762369 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-762369 --memory=3072 --driver=docker  --container-runtime=containerd: (28.992165328s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-762369 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1212 20:36:12.332097  183797 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:36:12.332307  183797 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:36:12.332335  183797 out.go:374] Setting ErrFile to fd 2...
	I1212 20:36:12.332364  183797 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:36:12.332681  183797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 20:36:12.333024  183797 out.go:368] Setting JSON to false
	I1212 20:36:12.333219  183797 mustload.go:66] Loading cluster: scheduled-stop-762369
	I1212 20:36:12.333662  183797 config.go:182] Loaded profile config "scheduled-stop-762369": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1212 20:36:12.333795  183797 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/scheduled-stop-762369/config.json ...
	I1212 20:36:12.334027  183797 mustload.go:66] Loading cluster: scheduled-stop-762369
	I1212 20:36:12.334199  183797 config.go:182] Loaded profile config "scheduled-stop-762369": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-762369 -n scheduled-stop-762369
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-762369 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1212 20:36:12.781359  183886 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:36:12.781559  183886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:36:12.781585  183886 out.go:374] Setting ErrFile to fd 2...
	I1212 20:36:12.781605  183886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:36:12.781984  183886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 20:36:12.782313  183886 out.go:368] Setting JSON to false
	I1212 20:36:12.783923  183886 daemonize_unix.go:73] killing process 183813 as it is an old scheduled stop
	I1212 20:36:12.784016  183886 mustload.go:66] Loading cluster: scheduled-stop-762369
	I1212 20:36:12.787705  183886 config.go:182] Loaded profile config "scheduled-stop-762369": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1212 20:36:12.787812  183886 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/scheduled-stop-762369/config.json ...
	I1212 20:36:12.788061  183886 mustload.go:66] Loading cluster: scheduled-stop-762369
	I1212 20:36:12.788181  183886 config.go:182] Loaded profile config "scheduled-stop-762369": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1212 20:36:12.793595    4120 retry.go:31] will retry after 75.343µs: open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/scheduled-stop-762369/pid: no such file or directory
I1212 20:36:12.793798    4120 retry.go:31] will retry after 101.993µs: open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/scheduled-stop-762369/pid: no such file or directory
I1212 20:36:12.795307    4120 retry.go:31] will retry after 266.467µs: open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/scheduled-stop-762369/pid: no such file or directory
I1212 20:36:12.796432    4120 retry.go:31] will retry after 333.569µs: open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/scheduled-stop-762369/pid: no such file or directory
I1212 20:36:12.797491    4120 retry.go:31] will retry after 729.335µs: open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/scheduled-stop-762369/pid: no such file or directory
I1212 20:36:12.798573    4120 retry.go:31] will retry after 1.040687ms: open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/scheduled-stop-762369/pid: no such file or directory
I1212 20:36:12.799693    4120 retry.go:31] will retry after 1.447036ms: open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/scheduled-stop-762369/pid: no such file or directory
I1212 20:36:12.801835    4120 retry.go:31] will retry after 1.369832ms: open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/scheduled-stop-762369/pid: no such file or directory
I1212 20:36:12.803986    4120 retry.go:31] will retry after 3.730263ms: open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/scheduled-stop-762369/pid: no such file or directory
I1212 20:36:12.808402    4120 retry.go:31] will retry after 3.480744ms: open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/scheduled-stop-762369/pid: no such file or directory
I1212 20:36:12.812618    4120 retry.go:31] will retry after 6.264697ms: open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/scheduled-stop-762369/pid: no such file or directory
I1212 20:36:12.819823    4120 retry.go:31] will retry after 12.944316ms: open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/scheduled-stop-762369/pid: no such file or directory
I1212 20:36:12.833009    4120 retry.go:31] will retry after 10.316002ms: open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/scheduled-stop-762369/pid: no such file or directory
I1212 20:36:12.844241    4120 retry.go:31] will retry after 12.171724ms: open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/scheduled-stop-762369/pid: no such file or directory
I1212 20:36:12.857555    4120 retry.go:31] will retry after 27.880947ms: open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/scheduled-stop-762369/pid: no such file or directory
I1212 20:36:12.885789    4120 retry.go:31] will retry after 35.439011ms: open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/scheduled-stop-762369/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-762369 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-762369 -n scheduled-stop-762369
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-762369
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-762369 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1212 20:36:38.705624  184565 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:36:38.705728  184565 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:36:38.705735  184565 out.go:374] Setting ErrFile to fd 2...
	I1212 20:36:38.705741  184565 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:36:38.706061  184565 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 20:36:38.706341  184565 out.go:368] Setting JSON to false
	I1212 20:36:38.706446  184565 mustload.go:66] Loading cluster: scheduled-stop-762369
	I1212 20:36:38.707055  184565 config.go:182] Loaded profile config "scheduled-stop-762369": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1212 20:36:38.707142  184565 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/scheduled-stop-762369/config.json ...
	I1212 20:36:38.707445  184565 mustload.go:66] Loading cluster: scheduled-stop-762369
	I1212 20:36:38.707647  184565 config.go:182] Loaded profile config "scheduled-stop-762369": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1212 20:37:22.896765    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-762369
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-762369: exit status 7 (69.454505ms)

-- stdout --
	scheduled-stop-762369
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-762369 -n scheduled-stop-762369
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-762369 -n scheduled-stop-762369: exit status 7 (65.262484ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-762369" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-762369
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-762369: (5.153884275s)
--- PASS: TestScheduledStopUnix (105.72s)
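
Note: judging from the log above, scheduled stops run as a detached background process whose pid is recorded under the profile directory (hence the pid-file retries), scheduling again kills the previous process, and --cancel-scheduled clears any pending stop. Usage sketch (profile name illustrative):

  minikube stop -p demo --schedule 5m        # stop in five minutes
  minikube stop -p demo --schedule 15s       # supersedes the earlier schedule
  minikube stop -p demo --cancel-scheduled   # "All existing scheduled stops cancelled"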

TestInsufficientStorage (12.54s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-922453 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-922453 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.965117938s)

-- stdout --
	{"specversion":"1.0","id":"36c3cd86-3d0e-49c3-b474-cb3603433568","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-922453] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"21c0d844-5803-4014-9d4e-0e73dc053e68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22112"}}
	{"specversion":"1.0","id":"60a9272e-9cc2-4ebe-9f70-a46f94c2a4c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9f0cfb66-bb56-4986-bbaf-6eb8bfd4df9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig"}}
	{"specversion":"1.0","id":"378c91d1-b2da-4ea7-bcb7-12a42aad5bea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube"}}
	{"specversion":"1.0","id":"cbd930a7-c440-4c8b-bb36-780d8bc8856a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4d82a8a4-bf08-4be9-9dbf-b314d3f46bd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d64294a6-1790-46b6-9fd2-689203113720","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"729b7877-6c86-42b0-9a7f-651388bf3797","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"b1b181f3-84ee-422e-b266-a5970164c5ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"11b58962-d259-4b45-a151-5aeef53b5e68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"19553097-35af-4713-a5a7-2801ca19a59b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-922453\" primary control-plane node in \"insufficient-storage-922453\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"cfe0b731-50ad-47f5-88ff-83bfd9dc545c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765505794-22112 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ef8b04c6-4dea-43d2-ac3f-8beae82d8baf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"308dbe12-bd80-4799-8b46-4ce57cbb782a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-922453 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-922453 --output=json --layout=cluster: exit status 7 (300.094006ms)

-- stdout --
	{"Name":"insufficient-storage-922453","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-922453","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1212 20:37:39.258020  186206 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-922453" does not appear in /home/jenkins/minikube-integration/22112-2315/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-922453 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-922453 --output=json --layout=cluster: exit status 7 (295.850919ms)

-- stdout --
	{"Name":"insufficient-storage-922453","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-922453","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1212 20:37:39.553767  186275 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-922453" does not appear in /home/jenkins/minikube-integration/22112-2315/kubeconfig
	E1212 20:37:39.564474  186275 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/insufficient-storage-922453/events.json: no such file or directory

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-922453" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-922453
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-922453: (1.974401329s)
--- PASS: TestInsufficientStorage (12.54s)
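
Note: with --output=json, start emits one CloudEvents-style JSON object per line, and the storage preflight aborts with exit code 26 (RSRC_DOCKER_STORAGE); status --layout=cluster then reports StatusCode 507 (InsufficientStorage). A sketch for pulling the error event out of the stream (assumes jq is installed; profile name illustrative):

  minikube start -p demo --output=json \
    | jq -c 'select(.type == "io.k8s.sigs.minikube.error") | .data'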

TestRunningBinaryUpgrade (315.58s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.393258338 start -p running-upgrade-301194 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.393258338 start -p running-upgrade-301194 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (31.633810514s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-301194 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1212 20:45:51.925245    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:47:22.897059    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:47:48.858286    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:49:51.906711    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-301194 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m37.564804399s)
helpers_test.go:176: Cleaning up "running-upgrade-301194" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-301194
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-301194: (1.994243054s)
--- PASS: TestRunningBinaryUpgrade (315.58s)
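
Note: this exercises an in-place binary upgrade: an older release creates the cluster and leaves it running, then the binary under test runs start on the same profile and must adopt the live cluster. Sketch (binary paths and profile name illustrative):

  /tmp/minikube-v1.35.0 start -p upgrade-demo --memory=3072
  ./out/minikube start -p upgrade-demo --memory=3072   # newer binary reconfigures the running cluster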

TestMissingContainerUpgrade (123.17s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.206753679 start -p missing-upgrade-222094 --memory=3072 --driver=docker  --container-runtime=containerd
E1212 20:37:48.856690    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.206753679 start -p missing-upgrade-222094 --memory=3072 --driver=docker  --container-runtime=containerd: (1m0.73370744s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-222094
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-222094
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-222094 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-222094 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (57.674814711s)
helpers_test.go:176: Cleaning up "missing-upgrade-222094" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-222094
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-222094: (2.6166483s)
--- PASS: TestMissingContainerUpgrade (123.17s)
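
Note: this variant also deletes the profile's docker container behind minikube's back, so the newer binary has to detect the missing machine and recreate it rather than fail. Sketch (names illustrative):

  /tmp/minikube-v1.35.0 start -p upgrade-demo
  docker stop upgrade-demo && docker rm upgrade-demo   # remove the machine out from under minikube
  ./out/minikube start -p upgrade-demo                 # must recreate the container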

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-640369 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-640369 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (95.463877ms)

-- stdout --
	* [NoKubernetes-640369] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
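
Note: --no-kubernetes and --kubernetes-version are mutually exclusive (exit 14, MK_USAGE). As the error message suggests, a version pinned in the global config must be cleared before starting a no-Kubernetes profile:

  minikube config unset kubernetes-version
  minikube start -p demo --no-kubernetes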

TestNoKubernetes/serial/StartWithK8s (49.39s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-640369 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-640369 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (48.874444367s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-640369 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (49.39s)

TestNoKubernetes/serial/StartWithStopK8s (25.3s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-640369 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-640369 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (22.503366472s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-640369 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-640369 status -o json: exit status 2 (330.334888ms)

-- stdout --
	{"Name":"NoKubernetes-640369","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-640369
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-640369: (2.465383519s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (25.30s)
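
Note: restarting an existing profile with --no-kubernetes keeps the host container running but leaves kubelet and the apiserver stopped, which status -o json reports with exit code 2. Sketch (profile name illustrative):

  minikube start -p demo --no-kubernetes
  minikube -p demo status -o json   # Host "Running", Kubelet/APIServer "Stopped"; exit 2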

TestNoKubernetes/serial/Start (7.67s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-640369 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-640369 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.669406412s)
--- PASS: TestNoKubernetes/serial/Start (7.67s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22112-2315/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-640369 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-640369 "sudo systemctl is-active --quiet service kubelet": exit status 1 (348.360683ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)
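
Note: the kubelet check leans on systemd semantics: systemctl is-active --quiet exits non-zero for an inactive unit, and minikube ssh propagates that as exit status 1. Sketch (profile name illustrative):

  if minikube ssh -p demo "sudo systemctl is-active --quiet service kubelet"; then
    echo "kubelet is running"
  else
    echo "kubelet is not running (expected with --no-kubernetes)"
  fi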

TestNoKubernetes/serial/ProfileList (0.88s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.88s)

TestNoKubernetes/serial/Stop (2.69s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-640369
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-640369: (2.688322904s)
--- PASS: TestNoKubernetes/serial/Stop (2.69s)

TestNoKubernetes/serial/StartNoArgs (7.38s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-640369 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-640369 --driver=docker  --container-runtime=containerd: (7.38023514s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.38s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-640369 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-640369 "sudo systemctl is-active --quiet service kubelet": exit status 1 (355.029804ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

TestStoppedBinaryUpgrade/Setup (2.05s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.05s)

TestStoppedBinaryUpgrade/Upgrade (306.79s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.692008758 start -p stopped-upgrade-404862 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1212 20:39:51.908172    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.692008758 start -p stopped-upgrade-404862 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (38.266230156s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.692008758 -p stopped-upgrade-404862 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.692008758 -p stopped-upgrade-404862 stop: (1.272649434s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-404862 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1212 20:42:22.896874    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:42:48.856758    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:43:45.976948    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:44:51.907144    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-404862 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m27.251826519s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (306.79s)
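
Note: the stopped-binary variant is the same upgrade flow, except the old binary stops the cluster first, so the binary under test starts a halted profile rather than adopting a live one. Sketch (paths and profile name illustrative):

  /tmp/minikube-v1.35.0 start -p upgrade-demo
  /tmp/minikube-v1.35.0 -p upgrade-demo stop
  ./out/minikube start -p upgrade-demo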

TestStoppedBinaryUpgrade/MinikubeLogs (2.12s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-404862
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-404862: (2.12280706s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.12s)

TestPause/serial/Start (59.96s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-905735 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-905735 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (59.956037977s)
--- PASS: TestPause/serial/Start (59.96s)

TestPause/serial/SecondStartNoReconfiguration (6.27s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-905735 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1212 20:51:14.976896    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-905735 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.251504087s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.27s)

TestPause/serial/Pause (0.75s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-905735 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.75s)

TestPause/serial/VerifyStatus (0.33s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-905735 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-905735 --output=json --layout=cluster: exit status 2 (325.507686ms)

-- stdout --
	{"Name":"pause-905735","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-905735","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
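
Note: a paused profile reports StatusCode 418 ("Paused") and status exits 2, distinct from exit 7 for stopped hosts. Round-trip sketch (profile name illustrative):

  minikube pause -p demo
  minikube status -p demo --output=json --layout=cluster   # exit 2, StatusCode 418
  minikube unpause -p demo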

TestPause/serial/Unpause (0.64s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-905735 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.64s)

TestPause/serial/PauseAgain (0.89s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-905735 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.89s)

TestPause/serial/DeletePaused (2.77s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-905735 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-905735 --alsologtostderr -v=5: (2.769396619s)
--- PASS: TestPause/serial/DeletePaused (2.77s)

TestPause/serial/VerifyDeletedResources (0.37s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-905735
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-905735: exit status 1 (17.87031ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-905735: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.37s)
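
Note: the deletion check treats a failing "docker volume inspect <profile>" as success, since exit 1 with "no such volume" is exactly what proves cleanup. Sketch (profile name illustrative):

  docker volume inspect pause-demo >/dev/null 2>&1 \
    && echo "volume still present" || echo "volume removed as expected"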

TestNetworkPlugins/group/false (3.59s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-455251 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-455251 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (193.163192ms)

-- stdout --
	* [false-455251] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1212 20:52:03.137384  244125 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:52:03.137511  244125 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:52:03.137531  244125 out.go:374] Setting ErrFile to fd 2...
	I1212 20:52:03.137536  244125 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:52:03.137784  244125 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
	I1212 20:52:03.138200  244125 out.go:368] Setting JSON to false
	I1212 20:52:03.139051  244125 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5673,"bootTime":1765567051,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1212 20:52:03.139125  244125 start.go:143] virtualization:  
	I1212 20:52:03.142666  244125 out.go:179] * [false-455251] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:52:03.146570  244125 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:52:03.146748  244125 notify.go:221] Checking for updates...
	I1212 20:52:03.152657  244125 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:52:03.155571  244125 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
	I1212 20:52:03.158436  244125 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
	I1212 20:52:03.161431  244125 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:52:03.164311  244125 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:52:03.167913  244125 config.go:182] Loaded profile config "kubernetes-upgrade-016181": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 20:52:03.168029  244125 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:52:03.203330  244125 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:52:03.203500  244125 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:52:03.263303  244125 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:52:03.254246807 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:52:03.263409  244125 docker.go:319] overlay module found
	I1212 20:52:03.266531  244125 out.go:179] * Using the docker driver based on user configuration
	I1212 20:52:03.269320  244125 start.go:309] selected driver: docker
	I1212 20:52:03.269349  244125 start.go:927] validating driver "docker" against <nil>
	I1212 20:52:03.269362  244125 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:52:03.272897  244125 out.go:203] 
	W1212 20:52:03.275818  244125 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1212 20:52:03.278773  244125 out.go:203] 

** /stderr **
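The MK_USAGE exit above is the expected result for this test group: minikube refuses to disable CNI while the container runtime is containerd, and the "false" group passes precisely because that rejection fires. A minimal sketch of the flag combination involved (the exact invocation is not captured in this log; --cni=false is assumed from the group name "false"):

    out/minikube-linux-arm64 start -p false-455251 --cni=false --driver=docker --container-runtime=containerd
    # expected to fail fast with:
    # X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI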
net_test.go:88: 
----------------------- debugLogs start: false-455251 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-455251

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-455251

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-455251

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-455251

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-455251

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-455251

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-455251

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-455251

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-455251

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-455251

>>> host: /etc/nsswitch.conf:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> host: /etc/hosts:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> host: /etc/resolv.conf:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-455251

>>> host: crictl pods:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> host: crictl containers:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> k8s: describe netcat deployment:
error: context "false-455251" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-455251" does not exist

>>> k8s: netcat logs:
error: context "false-455251" does not exist

>>> k8s: describe coredns deployment:
error: context "false-455251" does not exist

>>> k8s: describe coredns pods:
error: context "false-455251" does not exist

>>> k8s: coredns logs:
error: context "false-455251" does not exist

>>> k8s: describe api server pod(s):
error: context "false-455251" does not exist

>>> k8s: api server logs:
error: context "false-455251" does not exist

>>> host: /etc/cni:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> host: ip a s:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> host: ip r s:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> host: iptables-save:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> host: iptables table nat:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> k8s: describe kube-proxy daemon set:
error: context "false-455251" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-455251" does not exist

>>> k8s: kube-proxy logs:
error: context "false-455251" does not exist

>>> host: kubelet daemon status:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> host: kubelet daemon config:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> k8s: kubelet logs:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 20:40:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-016181
contexts:
- context:
    cluster: kubernetes-upgrade-016181
    user: kubernetes-upgrade-016181
  name: kubernetes-upgrade-016181
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-016181
  user:
    client-certificate: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/kubernetes-upgrade-016181/client.crt
    client-key: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/kubernetes-upgrade-016181/client.key
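Two details in this kubeconfig explain every failing probe above: current-context is empty, and the only context defined is kubernetes-upgrade-016181, so any kubectl call naming false-455251 can only fail. An illustrative reproduction, not part of the test run:

    kubectl --context false-455251 get pods
    # error: context "false-455251" does not exist
    kubectl config use-context kubernetes-upgrade-016181
    # would select the one context this file actually defines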

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-455251

>>> host: docker daemon status:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> host: docker daemon config:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> host: /etc/docker/daemon.json:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> host: docker system info:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> host: cri-docker daemon status:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> host: cri-docker daemon config:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> host: cri-dockerd version:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> host: containerd daemon status:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> host: containerd daemon config:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> host: /etc/containerd/config.toml:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> host: containerd config dump:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> host: crio daemon status:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> host: crio daemon config:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> host: /etc/crio:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

>>> host: crio config:
* Profile "false-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-455251"

----------------------- debugLogs end: false-455251 [took: 3.236884284s] --------------------------------
helpers_test.go:176: Cleaning up "false-455251" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-455251
--- PASS: TestNetworkPlugins/group/false (3.59s)

TestNetworkPlugins/group/auto/Start (79.42s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-455251 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-455251 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m19.422155164s)
--- PASS: TestNetworkPlugins/group/auto/Start (79.42s)
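With no --cni flag, minikube auto-selects a CNI for the containerd runtime. A quick way to see what landed on the node, mirroring the ssh form the KubeletFlags checks below use (a sketch only; /etc/cni/net.d is the conventional CNI config directory, not something this test asserts):

    out/minikube-linux-arm64 ssh -p auto-455251 "ls /etc/cni/net.d"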
TestNetworkPlugins/group/auto/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-455251 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (9.28s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-455251 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-chpqt" [2d4d6082-fe9a-4a70-8aff-175a95230dbf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-chpqt" [2d4d6082-fe9a-4a70-8aff-175a95230dbf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.005052051s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.28s)

TestNetworkPlugins/group/auto/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-455251 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-455251 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-455251 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
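The Localhost and HairPin checks above both run netcat in zero-I/O scan mode inside the pod; the hairpin case dials the pod back through its own service name. An annotated form of the probes shown in the log (flag meanings per standard nc):

    nc -w 5 -i 5 -z localhost 8080   # -z: connect without sending data, -w 5: 5s timeout
    nc -w 5 -i 5 -z netcat 8080      # service "netcat" resolves back to the pod itself (hairpin)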
TestNetworkPlugins/group/kindnet/Start (79.22s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-455251 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-455251 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m19.22067856s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (79.22s)

TestNetworkPlugins/group/kindnet/ControllerPod (6s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-ctjhq" [90d0aa11-b962-4839-9b00-1969edf57bdc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003706501s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-455251 "pgrep -a kubelet"
I1212 21:19:09.335994    4120 config.go:182] Loaded profile config "kindnet-455251": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.29s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-455251 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-rmb9h" [901877b8-b81a-458e-8746-ebf7ecfeb48a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-rmb9h" [901877b8-b81a-458e-8746-ebf7ecfeb48a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003087642s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.29s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-455251 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-455251 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-455251 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/calico/Start (61.03s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-455251 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-455251 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m1.032444913s)
--- PASS: TestNetworkPlugins/group/calico/Start (61.03s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-lgdkd" [a60a6812-9177-40a3-b100-701616cca779] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-lgdkd" [a60a6812-9177-40a3-b100-701616cca779] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003653935s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-455251 "pgrep -a kubelet"
I1212 21:20:47.026407    4120 config.go:182] Loaded profile config "calico-455251": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (8.27s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-455251 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-wvbqk" [5b1d4e04-7d9c-425e-b488-1d34d326a776] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-wvbqk" [5b1d4e04-7d9c-425e-b488-1d34d326a776] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.003964528s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.27s)

TestNetworkPlugins/group/calico/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-455251 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-455251 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-455251 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/Start (58.33s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-455251 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-455251 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (58.330147919s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.33s)
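Unlike the named plugins elsewhere in this group, --cni here takes a path: minikube applies testdata/kube-flannel.yaml as the CNI manifest instead of a bundled plugin. The same mechanism should work for any applyable CNI manifest (a sketch; the manifest path below is hypothetical):

    out/minikube-linux-arm64 start -p custom-flannel-455251 --cni=path/to/custom-cni.yaml --driver=docker --container-runtime=containerd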
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-455251 "pgrep -a kubelet"
I1212 21:22:16.498016    4120 config.go:182] Loaded profile config "custom-flannel-455251": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (8.28s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-455251 replace --force -f testdata/netcat-deployment.yaml
E1212 21:22:16.504098    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/auto-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-nmmgf" [23576e4f-a5a8-4206-a4f5-4c07504d65d0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-nmmgf" [23576e4f-a5a8-4206-a4f5-4c07504d65d0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.003953416s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.28s)

TestNetworkPlugins/group/custom-flannel/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-455251 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-455251 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-455251 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/Start (74.33s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-455251 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-455251 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m14.33057186s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (74.33s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-455251 "pgrep -a kubelet"
I1212 21:24:01.367352    4120 config.go:182] Loaded profile config "enable-default-cni-455251": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.32s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-455251 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-wrdpn" [d38044f5-7fa2-436b-8df4-d57d06f83261] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-wrdpn" [d38044f5-7fa2-436b-8df4-d57d06f83261] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003668335s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.32s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-455251 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-455251 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-455251 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestNetworkPlugins/group/flannel/Start (59.22s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-455251 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-455251 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (59.220238735s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.22s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-m96fv" [73abbc70-f253-4290-884f-046d005d77fb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004531132s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
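The ControllerPod checks poll until a pod matching the given label reports healthy. A rough kubectl equivalent of the wait performed above (illustrative only, not the test's actual implementation):

    kubectl --context flannel-455251 -n kube-flannel wait pod -l app=flannel --for=condition=Ready --timeout=10m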
TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-455251 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/flannel/NetCatPod (9.25s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-455251 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-6gcgm" [17159b02-405b-4559-914e-8fedae505f30] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-6gcgm" [17159b02-405b-4559-914e-8fedae505f30] Running
E1212 21:25:40.715703    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/calico-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:25:40.722220    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/calico-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:25:40.733603    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/calico-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:25:40.754993    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/calico-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:25:40.796382    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/calico-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:25:40.877811    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/calico-455251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003186828s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.25s)

TestNetworkPlugins/group/flannel/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-455251 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-455251 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-455251 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/Start (71.19s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-455251 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-455251 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m11.191230977s)
--- PASS: TestNetworkPlugins/group/bridge/Start (71.19s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-455251 "pgrep -a kubelet"
I1212 21:27:19.910899    4120 config.go:182] Loaded profile config "bridge-455251": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (8.28s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-455251 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-sdb6j" [798aecb0-3072-4051-a691-ede917b75851] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-sdb6j" [798aecb0-3072-4051-a691-ede917b75851] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.003219952s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.28s)

TestNetworkPlugins/group/bridge/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-455251 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-455251 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-455251 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (37/369)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.4
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
392 TestNetworkPlugins/group/kubenet 3.81
400 TestNetworkPlugins/group/cilium 3.81

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
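All nine DownloadOnly skips above fire because a preloaded tarball is already cached, so there is nothing left to download. A sketch for checking the cache on the build host, assuming minikube's default cache location:

  # preloads live under the minikube home's cache directory (default path; adjust for MINIKUBE_HOME)
  ls ~/.minikube/cache/preloaded-tarball/
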
TestDownloadOnlyKic (0.4s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-519095 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-519095" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-519095
--- SKIP: TestDownloadOnlyKic (0.40s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

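TestDockerFlags only runs against the docker container runtime, while this job tests containerd. For reference, a profile the test would exercise looks roughly like this (profile name is illustrative):

  out/minikube-linux-arm64 start -p docker-flags-demo --driver=docker --container-runtime=docker
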
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

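TestGvisorAddon is opt-in; the --gvisor=false message suggests the suite gates it behind a test-binary flag. A hedged sketch of opting in (the exact invocation is an assumption):

  # pass the custom flag through to the compiled test binary
  go test ./test/integration -run TestGvisorAddon -args --gvisor=true
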
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

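TestSkaffold depends on minikube's docker-env, which only exists for the docker runtime. For reference, pointing a local docker client at a docker-runtime profile looks like this (profile name is illustrative):

  eval "$(out/minikube-linux-arm64 -p skaffold-demo docker-env)"
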
TestNetworkPlugins/group/kubenet (3.81s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-455251 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-455251

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-455251

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-455251

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-455251

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-455251

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-455251

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-455251

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-455251

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-455251

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-455251

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> host: /etc/hosts:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> host: /etc/resolv.conf:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-455251

>>> host: crictl pods:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> host: crictl containers:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> k8s: describe netcat deployment:
error: context "kubenet-455251" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-455251" does not exist

>>> k8s: netcat logs:
error: context "kubenet-455251" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-455251" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-455251" does not exist

>>> k8s: coredns logs:
error: context "kubenet-455251" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-455251" does not exist

>>> k8s: api server logs:
error: context "kubenet-455251" does not exist

>>> host: /etc/cni:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> host: ip a s:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> host: ip r s:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> host: iptables-save:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> host: iptables table nat:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-455251" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-455251" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-455251" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> host: kubelet daemon config:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> k8s: kubelet logs:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 20:40:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-016181
contexts:
- context:
    cluster: kubernetes-upgrade-016181
    user: kubernetes-upgrade-016181
  name: kubernetes-upgrade-016181
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-016181
  user:
    client-certificate: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/kubernetes-upgrade-016181/client.crt
    client-key: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/kubernetes-upgrade-016181/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-455251

>>> host: docker daemon status:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> host: docker daemon config:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> host: docker system info:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> host: cri-docker daemon status:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> host: cri-docker daemon config:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> host: cri-dockerd version:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> host: containerd daemon status:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> host: containerd daemon config:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> host: containerd config dump:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> host: crio daemon status:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> host: crio daemon config:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> host: /etc/crio:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

>>> host: crio config:
* Profile "kubenet-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-455251"

----------------------- debugLogs end: kubenet-455251 [took: 3.642686008s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-455251" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-455251
--- SKIP: TestNetworkPlugins/group/kubenet (3.81s)
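
The kubenet plugin is skipped here because containerd requires a CNI plugin, and kubenet is not one. A sketch of exercising an explicit CNI with containerd instead (profile name is illustrative; --cni accepts values such as bridge, calico, cilium, and flannel):

  out/minikube-linux-arm64 start -p cni-demo --driver=docker --container-runtime=containerd --cni=bridge
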
TestNetworkPlugins/group/cilium (3.81s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-455251 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-455251

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-455251

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-455251

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-455251

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-455251

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-455251

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-455251

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-455251

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-455251

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-455251

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-455251

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-455251" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-455251" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-455251" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-455251" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-455251" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-455251" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-455251" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-455251" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-455251

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-455251

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-455251" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-455251" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-455251

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-455251

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-455251" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-455251" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-455251" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-455251" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-455251" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt
extensions:
- extension:
last-update: Fri, 12 Dec 2025 20:40:16 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: cluster_info
server: https://192.168.76.2:8443
name: kubernetes-upgrade-016181
contexts:
- context:
cluster: kubernetes-upgrade-016181
user: kubernetes-upgrade-016181
name: kubernetes-upgrade-016181
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-016181
user:
client-certificate: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/kubernetes-upgrade-016181/client.crt
client-key: /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/kubernetes-upgrade-016181/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-455251

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

>>> host: cri-dockerd version:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

>>> host: containerd daemon status:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

>>> host: containerd daemon config:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

>>> host: containerd config dump:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

>>> host: crio daemon status:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

>>> host: crio daemon config:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

>>> host: /etc/crio:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

>>> host: crio config:
* Profile "cilium-455251" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-455251"

----------------------- debugLogs end: cilium-455251 [took: 3.652776945s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-455251" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-455251
--- SKIP: TestNetworkPlugins/group/cilium (3.81s)
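If reproducing this locally, the cleanup can be verified by listing the remaining profiles with the same binary the test uses (a manual check, not part of the harness output):

    out/minikube-linux-arm64 profile list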
